Jan 26 14:09:41 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 14:09:41 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:41 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 
14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:09:42 crc 
restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:09:42 crc restorecon[4681]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:09:42 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 14:09:42 crc kubenswrapper[4922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 14:09:42 crc kubenswrapper[4922]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 14:09:42 crc kubenswrapper[4922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 14:09:42 crc kubenswrapper[4922]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 14:09:42 crc kubenswrapper[4922]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 14:09:42 crc kubenswrapper[4922]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.906288 4922 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909892 4922 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909908 4922 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909915 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909920 4922 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909926 4922 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909933 4922 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909938 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909943 4922 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909948 4922 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909952 4922 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909958 4922 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909963 4922 feature_gate.go:330] unrecognized feature gate: Example Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909968 4922 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909972 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909978 4922 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909983 4922 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909988 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909993 4922 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.909998 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910004 4922 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910009 4922 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910013 4922 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910019 4922 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910024 4922 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910029 4922 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910041 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910046 4922 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910053 4922 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910060 4922 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910085 4922 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910091 4922 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910097 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910102 4922 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910108 4922 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910113 4922 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910118 4922 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910123 4922 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910128 4922 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910133 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910139 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910144 4922 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910149 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910154 4922 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910159 4922 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910164 4922 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 
14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910168 4922 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910173 4922 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910177 4922 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910181 4922 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910186 4922 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910191 4922 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910195 4922 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910200 4922 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910204 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910209 4922 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910213 4922 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910218 4922 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910223 4922 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910230 4922 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
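The long run of feature_gate.go:330 warnings above, repeated again later in this log (apparently once per parsing pass over the gate sources), is expected on OpenShift: the cluster-level feature gate list (GatewayAPI, NewOLM, PinnedImages, and the rest) is handed to the kubelet wholesale, and the kubelet only registers upstream Kubernetes gates, so unknown names are warned about and dropped rather than treated as errors. A self-contained sketch of that warn-and-skip behavior; warnUnknownGates and its tiny gate table are illustrative stand-ins, not kubelet code:

// Sketch of the warn-and-skip handling behind the feature_gate.go:330
// lines above; hypothetical helper, not the kubelet's actual code.
package main

import (
	"fmt"
	"log"
)

// known mimics the kubelet's registered gate table (tiny subset).
var known = map[string]bool{
	"CloudDualStackNodeIPs":                  true,
	"DisableKubeletCloudCredentialProviders": true,
	"KMSv1":                                  true,
}

func warnUnknownGates(requested map[string]bool) map[string]bool {
	effective := map[string]bool{}
	for name, enabled := range requested {
		if !known[name] {
			log.Printf("W feature_gate: unrecognized feature gate: %s", name)
			continue // skipped, not fatal
		}
		effective[name] = enabled
	}
	return effective
}

func main() {
	// OpenShift-level gates mixed with Kubernetes-level ones, as in the log.
	requested := map[string]bool{
		"GatewayAPI":            true, // OpenShift-only: warned and dropped
		"NewOLM":                true, // OpenShift-only: warned and dropped
		"CloudDualStackNodeIPs": true, // known GA gate: kept
	}
	fmt.Println(warnUnknownGates(requested))
}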
Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910235 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910241 4922 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910247 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910253 4922 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910258 4922 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910263 4922 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910268 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910273 4922 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910279 4922 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910284 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910289 4922 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.910294 4922 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910604 4922 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910620 4922 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910630 4922 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910638 4922 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910646 4922 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910652 4922 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910660 4922 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910667 4922 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910672 4922 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910678 4922 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910684 4922 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910690 4922 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910696 4922 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910701 4922 flags.go:64] FLAG: --cgroup-root="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910707 4922 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910713 4922 flags.go:64] FLAG: 
--client-ca-file="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910718 4922 flags.go:64] FLAG: --cloud-config="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910723 4922 flags.go:64] FLAG: --cloud-provider="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910729 4922 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910739 4922 flags.go:64] FLAG: --cluster-domain="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910744 4922 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910751 4922 flags.go:64] FLAG: --config-dir="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910756 4922 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910763 4922 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910772 4922 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910777 4922 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910783 4922 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910789 4922 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910796 4922 flags.go:64] FLAG: --contention-profiling="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910802 4922 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910809 4922 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910815 4922 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910820 4922 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910828 4922 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910834 4922 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910839 4922 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910845 4922 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910852 4922 flags.go:64] FLAG: --enable-server="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910857 4922 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910865 4922 flags.go:64] FLAG: --event-burst="100" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910871 4922 flags.go:64] FLAG: --event-qps="50" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910876 4922 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910882 4922 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910887 4922 flags.go:64] FLAG: --eviction-hard="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910894 4922 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910900 4922 flags.go:64] FLAG: 
--eviction-minimum-reclaim="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910905 4922 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910911 4922 flags.go:64] FLAG: --eviction-soft="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910916 4922 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910921 4922 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910926 4922 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910932 4922 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910937 4922 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910943 4922 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910948 4922 flags.go:64] FLAG: --feature-gates="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910955 4922 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910961 4922 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910966 4922 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910971 4922 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910977 4922 flags.go:64] FLAG: --healthz-port="10248" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910983 4922 flags.go:64] FLAG: --help="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910989 4922 flags.go:64] FLAG: --hostname-override="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.910996 4922 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911002 4922 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911008 4922 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911013 4922 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911018 4922 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911023 4922 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911029 4922 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911034 4922 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911040 4922 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911045 4922 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911051 4922 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911057 4922 flags.go:64] FLAG: --kube-reserved="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911085 4922 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911091 4922 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911096 4922 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911102 4922 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911107 4922 flags.go:64] FLAG: --lock-file="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911112 4922 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911118 4922 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911123 4922 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911132 4922 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911137 4922 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911143 4922 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911148 4922 flags.go:64] FLAG: --logging-format="text" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911154 4922 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911161 4922 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911166 4922 flags.go:64] FLAG: --manifest-url="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911172 4922 flags.go:64] FLAG: --manifest-url-header="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911179 4922 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911184 4922 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911193 4922 flags.go:64] FLAG: --max-pods="110" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911198 4922 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911204 4922 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911210 4922 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911215 4922 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911221 4922 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911227 4922 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911232 4922 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911247 4922 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911252 4922 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911258 4922 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911264 4922 flags.go:64] FLAG: --pod-cidr="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911269 4922 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911278 4922 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911284 4922 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911289 4922 flags.go:64] FLAG: --pods-per-core="0" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911295 4922 flags.go:64] FLAG: --port="10250" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911302 4922 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911308 4922 flags.go:64] FLAG: --provider-id="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911314 4922 flags.go:64] FLAG: --qos-reserved="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911319 4922 flags.go:64] FLAG: --read-only-port="10255" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911325 4922 flags.go:64] FLAG: --register-node="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911331 4922 flags.go:64] FLAG: --register-schedulable="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911337 4922 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911347 4922 flags.go:64] FLAG: --registry-burst="10" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911352 4922 flags.go:64] FLAG: --registry-qps="5" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911359 4922 flags.go:64] FLAG: --reserved-cpus="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911365 4922 flags.go:64] FLAG: --reserved-memory="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911373 4922 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911378 4922 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911384 4922 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911390 4922 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911395 4922 flags.go:64] FLAG: --runonce="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911401 4922 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911407 4922 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911413 4922 flags.go:64] FLAG: --seccomp-default="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911418 4922 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911424 4922 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911429 4922 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911435 4922 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911441 4922 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911446 4922 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 
14:09:42.911452 4922 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911457 4922 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911463 4922 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911468 4922 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911474 4922 flags.go:64] FLAG: --system-cgroups="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911479 4922 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911487 4922 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911493 4922 flags.go:64] FLAG: --tls-cert-file="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911498 4922 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911506 4922 flags.go:64] FLAG: --tls-min-version="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911511 4922 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911517 4922 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911523 4922 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911528 4922 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911533 4922 flags.go:64] FLAG: --v="2" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911542 4922 flags.go:64] FLAG: --version="false" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911549 4922 flags.go:64] FLAG: --vmodule="" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911558 4922 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.911564 4922 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911716 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911725 4922 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911731 4922 feature_gate.go:330] unrecognized feature gate: Example Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911737 4922 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911743 4922 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911749 4922 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911754 4922 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911760 4922 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911765 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911770 4922 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 14:09:42 crc 
kubenswrapper[4922]: W0126 14:09:42.911776 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911781 4922 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911786 4922 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911791 4922 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911796 4922 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911801 4922 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911807 4922 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911811 4922 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911816 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911821 4922 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911826 4922 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911831 4922 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911835 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911840 4922 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911845 4922 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911849 4922 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911854 4922 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911858 4922 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911864 4922 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911869 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911875 4922 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911880 4922 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911885 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911890 4922 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911894 4922 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911900 4922 feature_gate.go:330] unrecognized feature 
gate: OVNObservability Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911904 4922 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911909 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911914 4922 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911918 4922 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911925 4922 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911931 4922 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911937 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911942 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911946 4922 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911951 4922 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911956 4922 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911961 4922 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911966 4922 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911970 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911975 4922 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911981 4922 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911987 4922 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911991 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.911996 4922 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912002 4922 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
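The flags.go:64 block further up is the kubelet echoing every command-line flag with its effective value as FLAG: --name="value" pairs, which makes the journal a reliable place to recover the effective invocation after the fact. A sketch that scrapes those pairs back out of journal text on stdin; the regular expression encodes an assumption about the exact quoting:

// Sketch: recover the effective kubelet flags from the flags.go:64
// dump above (fed via stdin, e.g. piped from journalctl).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches entries like: flags.go:64] FLAG: --node-ip="192.168.126.11"
var flagRe = regexp.MustCompile(`FLAG: (--[\w-]+)="([^"]*)"`)

func main() {
	flags := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range flagRe.FindAllStringSubmatch(sc.Text(), -1) {
			flags[m[1]] = m[2]
		}
	}
	// For this node: /etc/kubernetes/kubelet.conf and 192.168.126.11.
	fmt.Println(flags["--config"], flags["--node-ip"])
}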
Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912008 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912013 4922 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912018 4922 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912023 4922 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912028 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912033 4922 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912040 4922 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912047 4922 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912053 4922 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912058 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912080 4922 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912094 4922 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912099 4922 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912104 4922 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.912109 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.912117 4922 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.921525 4922 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.921588 4922 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921743 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921768 4922 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921779 4922 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921791 4922 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921800 
4922 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921809 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921818 4922 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921828 4922 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921837 4922 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921848 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921858 4922 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921870 4922 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921881 4922 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921891 4922 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921900 4922 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921908 4922 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921917 4922 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921925 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921934 4922 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921945 4922 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921954 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921962 4922 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921971 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921980 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921988 4922 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.921997 4922 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922005 4922 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922015 4922 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922024 4922 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 
14:09:42.922035 4922 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922044 4922 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922053 4922 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922067 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922101 4922 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922109 4922 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922118 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922127 4922 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922140 4922 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922154 4922 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922165 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922175 4922 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922184 4922 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922193 4922 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922202 4922 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922212 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922223 4922 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922236 4922 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
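The first feature_gate.go:386 summary above shows the resolved result of all those warnings: only gates the kubelet itself registers survive, with the explicitly forced ones (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, ValidatingAdmissionPolicy, all true) and the remainder at their defaults; server.go then reports kubeletVersion v1.31.5. The summary line looks like Go's fmt rendering of a struct wrapping a map, so it can be decoded mechanically; a sketch under that formatting assumption:

// Sketch: decode a feature_gate.go:386 summary of the form
// "feature gates: {map[Name:true Name:false ...]}"; the exact
// shape is an assumption based on the lines above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseGateSummary(s string) map[string]bool {
	gates := map[string]bool{}
	start := strings.Index(s, "map[")
	end := strings.LastIndex(s, "]")
	if start < 0 || end < start {
		return gates
	}
	for _, pair := range strings.Fields(s[start+len("map["):end]) {
		if name, val, ok := strings.Cut(pair, ":"); ok {
			if b, err := strconv.ParseBool(val); err == nil {
				gates[name] = b
			}
		}
	}
	return gates
}

func main() {
	line := `feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}`
	fmt.Println(parseGateSummary(line)["NodeSwap"]) // false
}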
Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922247 4922 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922257 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922266 4922 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922275 4922 feature_gate.go:330] unrecognized feature gate: Example Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922284 4922 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922292 4922 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922301 4922 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922310 4922 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922320 4922 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922328 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922336 4922 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922345 4922 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922354 4922 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922362 4922 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922371 4922 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922380 4922 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922388 4922 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922396 4922 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922405 4922 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922414 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922422 4922 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922430 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922439 4922 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922448 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.922463 4922 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922729 4922 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922745 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922755 4922 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922902 4922 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922915 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922925 4922 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922935 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922944 4922 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922955 4922 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.922966 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923002 4922 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923011 4922 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923021 4922 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923031 4922 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923039 4922 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923048 4922 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923056 4922 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923097 4922 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923106 4922 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923117 4922 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923125 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923134 4922 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923142 4922 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923151 4922 feature_gate.go:330] unrecognized 
feature gate: RouteAdvertisements Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923160 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923168 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923177 4922 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923186 4922 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923195 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923204 4922 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923213 4922 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923222 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923230 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923238 4922 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923247 4922 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923256 4922 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923264 4922 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923273 4922 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923281 4922 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923290 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923299 4922 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923308 4922 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923319 4922 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923329 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923338 4922 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923346 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923355 4922 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923363 4922 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923371 4922 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923383 4922 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923395 4922 feature_gate.go:330] unrecognized feature gate: Example Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923404 4922 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923414 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923423 4922 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923432 4922 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923443 4922 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923451 4922 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923460 4922 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923468 4922 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923476 4922 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923485 4922 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923496 4922 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923508 4922 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923518 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923529 4922 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923541 4922 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923552 4922 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923562 4922 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923571 4922 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923582 4922 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 14:09:42 crc kubenswrapper[4922]: W0126 14:09:42.923592 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.923604 4922 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.924138 4922 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.928879 4922 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.929117 4922 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
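Here the kubelet confirms client certificate rotation is on, finds the bootstrap kubeconfig still valid, and loads /var/lib/kubelet/pki/kubelet-client-current.pem; the expiration and rotation deadline logged in the next entries come from that file. A standard-library sketch to cross-check them, assuming the certificate is the first PEM block in the file:

// Sketch: print the expiry of the kubelet client cert the log says
// it loaded, to cross-check the rotation-deadline message below.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no leading CERTIFICATE block; file-layout assumption failed")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires:", cert.NotAfter) // log: 2026-02-24 05:52:08 +0000 UTC
}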
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.929869 4922 server.go:997] "Starting client certificate rotation"
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.929922 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.930189 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-28 06:35:32.784195108 +0000 UTC
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.930321 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.937872 4922 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 14:09:42 crc kubenswrapper[4922]: E0126 14:09:42.940061 4922 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError"
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.941464 4922 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.953483 4922 log.go:25] "Validated CRI v1 runtime API"
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.976365 4922 log.go:25] "Validated CRI v1 image API"
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.978553 4922 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.981914 4922 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-14-04-29-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 14:09:42 crc kubenswrapper[4922]: I0126 14:09:42.981965 4922 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.010693 4922 manager.go:217] Machine: {Timestamp:2026-01-26 14:09:43.008510985 +0000 UTC m=+0.210773827 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e5a8e8c1-3ae9-423e-89aa-88a14e24c694 BootID:d465894b-675b-4495-9485-a609c23a81b4 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:36:be:43 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:36:be:43 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0e:09:8f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:47:f7:3a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e1:f4:1f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:95:ef:cd Speed:-1 Mtu:1496} {Name:eth10 MacAddress:92:b5:b4:2b:e9:95 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:d9:02:97:ee:b1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.011169 4922 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.011430 4922 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.012528 4922 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.012937 4922 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.013020 4922 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.013541 4922 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.013564 4922 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.013895 4922 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.013953 4922 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.014331 4922 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.015197 4922 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.016133 4922 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.016173 4922 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.016221 4922 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.016251 4922 kubelet.go:324] "Adding apiserver pod source"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.016276 4922 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.018660 4922 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 14:09:43 crc kubenswrapper[4922]: W0126 14:09:43.019203 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused
Jan 26 14:09:43 crc kubenswrapper[4922]: W0126 14:09:43.019263 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused
Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.019327 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError"
Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.019359 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.019380 4922 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
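Every failure so far shares one root cause: dial tcp 38.102.83.179:6443: connect: connection refused, i.e. nothing is listening on the internal API endpoint yet. A stand-alone Go probe (hypothetical diagnostic tooling, not part of the kubelet; endpoint copied from the log) that reproduces the same check:

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // api-int.crc.testing resolves to 38.102.83.179 in the records above.
    conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
    if err != nil {
        // While kube-apiserver is down this prints the same
        // "connect: connection refused" seen in the reflector errors.
        fmt.Println("dial failed:", err)
        return
    }
    defer conn.Close()
    fmt.Println("TCP connect OK:", conn.RemoteAddr())
}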
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.020525 4922 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021382 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021437 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021459 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021474 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021502 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021520 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021538 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021565 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021586 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021602 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021664 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.021683 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.022003 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.023464 4922 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.023494 4922 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.024089 4922 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.024187 4922 server.go:1280] "Started kubelet"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.026620 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 26 14:09:43 crc systemd[1]: Started Kubernetes Kubelet.
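The certificate_manager.go records track expiration and rotation deadlines for the client and serving certificate pairs under /var/lib/kubelet/pki. A small Go sketch (assumed diagnostic, not kubelet code) that reads one of those PEM bundles and prints its NotAfter, which should match the "Certificate expiration is ..." value logged:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    // Path taken from the certificate_store.go record above.
    data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // The -current.pem bundle holds cert and key blocks; inspect the
    // first CERTIFICATE block.
    for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
        if block.Type != "CERTIFICATE" {
            continue
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter.UTC())
        return
    }
    fmt.Fprintln(os.Stderr, "no CERTIFICATE block found")
}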
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.026679 4922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.026866 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:25:39.16006976 +0000 UTC
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.027167 4922 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.027251 4922 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.027297 4922 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.027995 4922 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.034189 4922 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 14:09:43 crc kubenswrapper[4922]: W0126 14:09:43.035272 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused
Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.035498 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError"
Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.035682 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="200ms"
Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.035634 4922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e4d3287a768af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 14:09:43.022782639 +0000 UTC m=+0.225045451,LastTimestamp:2026-01-26 14:09:43.022782639 +0000 UTC m=+0.225045451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.036596 4922 factory.go:55] Registering systemd factory
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.036675 4922 factory.go:221] Registration of the systemd container factory successfully
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.036651 4922 server.go:460] "Adding debug handlers to kubelet server"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.037348 4922 factory.go:153] Registering CRI-O factory
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.037397 4922 factory.go:221] Registration of the crio container factory successfully
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.037499 4922 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.037533 4922 factory.go:103] Registering Raw factory
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.037559 4922 manager.go:1196] Started watching for new ooms in manager
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.038581 4922 manager.go:319] Starting recovery of all containers
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.044862 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.044913 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046193 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046272 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046327 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046355 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046438 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046479 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046511 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046551 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046579 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046613 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046642 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046682 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046708 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046740 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046761 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046785 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046816 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046840 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046879 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046904 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046930 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046961 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.046989 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047037 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047113 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047164 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047190 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047225 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047253 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047283 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047307 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047333 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047380 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047404 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047439 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047464 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047488 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047520 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047544 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047579 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047612 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047637 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047679 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047713 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047756 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047786 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047818 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047862 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047906 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047947 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.047987 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048044 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048138 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048175 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048222 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048253 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048294 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048334 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048390 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048427 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048452 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048492 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048532 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048578 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048611 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048634 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048667 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048691 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048715 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048761 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048866 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048916 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.048960 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049022 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049118 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049164 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049210 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049242 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049268 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049304 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049329 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049366 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049389 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049411 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049445 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049471 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049503 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049700 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049784 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049809 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049838 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049859 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049882 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049903 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049920 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049942 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049960 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.049981 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050002 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050021 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050044 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050082 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050397 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050497 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050551 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050590 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050614 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050639 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050665 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050683 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050709 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050728 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050756 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050770 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050791 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050807 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050824 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050848 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050862 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050881 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050897 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050913 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050933 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050948 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050969 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050983 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.050997 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051014 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051029 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051046 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051064 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051096 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051110 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051121 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051178 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051193 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051208 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051222 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051234 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d"
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051252 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051265 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051277 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051292 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051303 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051320 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051333 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.051989 4922 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052016 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052038 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052051 4922 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052084 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052101 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052113 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052129 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052141 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052151 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052172 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052185 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052200 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052218 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052233 4922 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052256 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052270 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052287 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052301 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052316 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052330 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052344 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052358 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052372 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052384 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052404 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052420 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052432 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052449 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052462 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052477 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052489 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052503 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052524 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052535 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052551 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052564 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052580 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052595 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052608 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052622 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052638 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052651 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052670 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052682 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052696 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052709 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052731 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052746 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052758 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052773 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052786 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052798 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052814 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052824 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052838 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052895 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052909 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052923 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052934 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052944 4922 reconstruct.go:97] "Volume reconstruction finished" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.052952 4922 reconciler.go:26] "Reconciler: start to sync state" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.067298 4922 manager.go:324] Recovery completed Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.079247 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.083563 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.083623 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.083640 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.085272 4922 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.085313 4922 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.085350 4922 state_mem.go:36] "Initialized new in-memory state store" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.089266 4922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.091104 4922 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.091144 4922 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.091175 4922 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.091220 4922 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 14:09:43 crc kubenswrapper[4922]: W0126 14:09:43.120201 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.120303 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.126905 4922 policy_none.go:49] "None policy: Start" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.128504 4922 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.128543 4922 state_mem.go:35] "Initializing new in-memory state store" Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.134812 4922 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.189850 4922 manager.go:334] "Starting Device Plugin manager" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.189923 4922 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.189943 4922 server.go:79] "Starting device plugin registration server" Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.191394 4922 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.191694 4922 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.191737 4922 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.192030 4922 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.192234 4922 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.192256 4922 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.204429 4922 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.237531 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="400ms" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.292513 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.294667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.294730 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.294751 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.294802 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.295777 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.391688 4922 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.391920 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.394481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.394539 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.394557 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.394765 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.395842 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.395943 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.396835 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.396898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.396922 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.397262 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.397732 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.397835 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.398025 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.398050 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.398095 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.400246 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.400290 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.400315 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.400328 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.400367 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.400386 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.400806 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.401110 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.401169 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.402723 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.402778 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.402795 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.403429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.403497 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.403518 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.403610 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.403760 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.403793 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.412242 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.412318 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.412359 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.412559 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.412618 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.412640 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.413026 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.413128 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.414422 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.414484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.414501 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457733 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457795 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457823 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457846 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457870 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457892 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457919 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457945 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457966 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.457985 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.458006 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.458024 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.458043 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.458095 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.458113 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.496231 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.497530 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.497567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.497575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.497597 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.498017 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560394 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560456 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560482 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560500 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560520 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560536 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560556 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560578 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560626 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560646 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560667 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560685 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560702 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560719 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560735 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560795 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560936 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: 
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.560971 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561016 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561050 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561103 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561128 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561092 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561162 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561183 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561203 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561226 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
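Each hostPath volume in this block shows up twice, first as "operationExecutor.MountVolume started" (reconciler_common.go:218) and then as "MountVolume.SetUp succeeded" (operation_generator.go:637). Diffing the two sets is a quick way to spot a mount that never completed. A sketch under the same hypothetical kubelet.log assumption; the pattern matches the escaped quotes exactly as they appear in these entries:

    # Pair "MountVolume started" with "MountVolume.SetUp succeeded" per UniqueName
    # and report anything left pending.
    import re

    PHASE = re.compile(
        r'(?P<phase>operationExecutor\.MountVolume started for volume|'
        r'MountVolume\.SetUp succeeded for volume)'
        r'.*?UniqueName: \\"(?P<unique>[^"\\]+)\\"',
        re.S,
    )

    started, succeeded = set(), set()
    for m in PHASE.finditer(open("kubelet.log").read()):
        if m.group("phase").startswith("operationExecutor"):
            started.add(m.group("unique"))
        else:
            succeeded.add(m.group("unique"))

    print("mounts started but not (yet) succeeded:", sorted(started - succeeded))

In this window the diff should come back empty: all fifteen hostPath mounts for the five static pods succeed within the same second.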
kubenswrapper[4922]: I0126 14:09:43.561353 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.561000 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.639203 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="800ms" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.734633 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.741241 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: W0126 14:09:43.764369 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6d35a65bc250ba1ccd316c3819146ec799c16c3ca88a1536de769edf23daea23 WatchSource:0}: Error finding container 6d35a65bc250ba1ccd316c3819146ec799c16c3ca88a1536de769edf23daea23: Status 404 returned error can't find the container with id 6d35a65bc250ba1ccd316c3819146ec799c16c3ca88a1536de769edf23daea23 Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.765883 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: W0126 14:09:43.767512 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7a03bc493cfa0e9c9ca24fb8c28b6614789ff887fc6533a5b0b303215a5a8f32 WatchSource:0}: Error finding container 7a03bc493cfa0e9c9ca24fb8c28b6614789ff887fc6533a5b0b303215a5a8f32: Status 404 returned error can't find the container with id 7a03bc493cfa0e9c9ca24fb8c28b6614789ff887fc6533a5b0b303215a5a8f32 Jan 26 14:09:43 crc kubenswrapper[4922]: W0126 14:09:43.780342 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ffc0997dc1ac3f7c36635ba8e220f96343ba5299cb76f9182b2e4365c87fcee8 WatchSource:0}: Error finding container ffc0997dc1ac3f7c36635ba8e220f96343ba5299cb76f9182b2e4365c87fcee8: Status 404 returned error can't find the container with id ffc0997dc1ac3f7c36635ba8e220f96343ba5299cb76f9182b2e4365c87fcee8 Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.787217 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.795790 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:09:43 crc kubenswrapper[4922]: W0126 14:09:43.806365 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9965327614fc2e997958ae5118407020d50103c59b41253ba67c067c0736a489 WatchSource:0}: Error finding container 9965327614fc2e997958ae5118407020d50103c59b41253ba67c067c0736a489: Status 404 returned error can't find the container with id 9965327614fc2e997958ae5118407020d50103c59b41253ba67c067c0736a489 Jan 26 14:09:43 crc kubenswrapper[4922]: W0126 14:09:43.814967 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5c5e0c08c245e37517b3210cce944cebfcddf61c511f6c7b100205bb7708aff2 WatchSource:0}: Error finding container 5c5e0c08c245e37517b3210cce944cebfcddf61c511f6c7b100205bb7708aff2: Status 404 returned error can't find the container with id 5c5e0c08c245e37517b3210cce944cebfcddf61c511f6c7b100205bb7708aff2 Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.899244 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.901330 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.901392 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.901412 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:43 crc kubenswrapper[4922]: I0126 14:09:43.901451 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:09:43 crc kubenswrapper[4922]: E0126 14:09:43.902220 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.025414 4922 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.027576 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:22:56.836240255 +0000 UTC Jan 26 14:09:44 crc kubenswrapper[4922]: W0126 14:09:44.050067 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 26 14:09:44 crc kubenswrapper[4922]: E0126 14:09:44.050205 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 26 
14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.096294 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9965327614fc2e997958ae5118407020d50103c59b41253ba67c067c0736a489"} Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.097189 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ffc0997dc1ac3f7c36635ba8e220f96343ba5299cb76f9182b2e4365c87fcee8"} Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.098388 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6d35a65bc250ba1ccd316c3819146ec799c16c3ca88a1536de769edf23daea23"} Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.104152 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7a03bc493cfa0e9c9ca24fb8c28b6614789ff887fc6533a5b0b303215a5a8f32"} Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.105159 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5c5e0c08c245e37517b3210cce944cebfcddf61c511f6c7b100205bb7708aff2"} Jan 26 14:09:44 crc kubenswrapper[4922]: W0126 14:09:44.113495 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 26 14:09:44 crc kubenswrapper[4922]: E0126 14:09:44.113637 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:09:44 crc kubenswrapper[4922]: E0126 14:09:44.441009 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="1.6s" Jan 26 14:09:44 crc kubenswrapper[4922]: W0126 14:09:44.464431 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 26 14:09:44 crc kubenswrapper[4922]: E0126 14:09:44.464546 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:09:44 crc kubenswrapper[4922]: W0126 14:09:44.622995 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 26 14:09:44 crc kubenswrapper[4922]: E0126 14:09:44.623170 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.703129 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.704638 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.704717 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.704738 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.704784 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:09:44 crc kubenswrapper[4922]: E0126 14:09:44.705657 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 26 14:09:44 crc kubenswrapper[4922]: I0126 14:09:44.951107 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 14:09:44 crc kubenswrapper[4922]: E0126 14:09:44.952599 4922 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.026263 4922 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.028514 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:31:00.697636604 +0000 UTC Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.111675 4922 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542" exitCode=0 Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.111848 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542"} Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.111945 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:45 crc 
kubenswrapper[4922]: I0126 14:09:45.113181 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.113249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.113275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.114824 4922 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40" exitCode=0 Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.114926 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40"} Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.114956 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.116354 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.116399 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.116422 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.117267 4922 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29" exitCode=0 Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.117401 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29"} Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.117431 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.118593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.118645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.118689 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.121348 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87"} Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.121378 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:45 crc kubenswrapper[4922]: 
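
NOTE: The kubernetes.io/kubelet-serving lines that repeat roughly once a second (14:09:44 through 14:10:02) always report the same expiration, 2026-02-24 05:53:03 UTC, but a different rotation deadline each time. That is expected: client-go's certificate manager draws a randomized deadline inside the later part of the certificate's validity window on every evaluation, and while the API server is unreachable the evaluation keeps re-running. The observed deadlines (early November 2025 through mid January 2026) are consistent with a roughly 70-90% window over a one-year certificate. An illustrative sketch under exactly those assumptions (the precise fractions live in k8s.io/client-go/util/certificate and are not quoted here):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point between 70% and 90% of the
// certificate lifetime, in the spirit of client-go's certificate
// manager; the fractions are assumptions made for this sketch.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed 1-year cert
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter)) // a fresh draw each call
	}
}
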
I0126 14:09:45.121402 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903"} Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.121428 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf"} Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.121447 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747"} Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.125586 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.125647 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.125667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.130960 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9" exitCode=0 Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.131041 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9"} Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.131087 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.132272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.132301 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.132312 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.135216 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.136491 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.136560 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:45 crc kubenswrapper[4922]: I0126 14:09:45.136580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.030446 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration 
is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:31:39.2109094 +0000 UTC Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.138614 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d"} Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.139096 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306"} Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.139117 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe"} Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.139139 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5"} Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.141374 4922 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8" exitCode=0 Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.141479 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8"} Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.141595 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.143196 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.143246 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.143256 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.143680 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0f8f85a98054e53886511d2b982872884c925f3331ec72172233c1e15f36d2d7"} Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.143714 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.145248 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.145300 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.145318 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.148348 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.148644 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc"} Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.148742 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b"} Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.148766 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0"} Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.148861 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.150888 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.150935 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.150949 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.152084 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.152109 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.152120 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.306614 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.308295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.308356 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.308371 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:46 crc kubenswrapper[4922]: I0126 14:09:46.308408 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.031018 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:46:40.922795836 +0000 UTC Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.154542 4922 generic.go:334] "Generic (PLEG): container 
finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d" exitCode=0 Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.154634 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d"} Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.154765 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.156116 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.156170 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.156189 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.159822 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f"} Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.159914 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.159950 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.159958 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.160019 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.161756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.161766 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.161855 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.161867 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.161829 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.161979 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.162017 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.161982 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.162045 4922 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.567995 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.568346 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.570220 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.570282 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:47 crc kubenswrapper[4922]: I0126 14:09:47.570303 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:48 crc kubenswrapper[4922]: I0126 14:09:48.032028 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:40:18.596491031 +0000 UTC Jan 26 14:09:48 crc kubenswrapper[4922]: I0126 14:09:48.166944 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b"} Jan 26 14:09:48 crc kubenswrapper[4922]: I0126 14:09:48.167008 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4"} Jan 26 14:09:48 crc kubenswrapper[4922]: I0126 14:09:48.167028 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540"} Jan 26 14:09:48 crc kubenswrapper[4922]: I0126 14:09:48.167011 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:09:48 crc kubenswrapper[4922]: I0126 14:09:48.167131 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:48 crc kubenswrapper[4922]: I0126 14:09:48.168589 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:48 crc kubenswrapper[4922]: I0126 14:09:48.168656 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:48 crc kubenswrapper[4922]: I0126 14:09:48.168678 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.004438 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.032347 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 00:05:46.899583689 +0000 UTC Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.176441 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d"} Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.176515 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31"} Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.176620 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.177962 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.178042 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.178090 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.214042 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.352186 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.352540 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.352622 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.354829 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.354884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.354905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.370868 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:49 crc kubenswrapper[4922]: I0126 14:09:49.641216 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.033354 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:05:32.447418311 +0000 UTC Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.180786 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.180821 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.182960 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.183005 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.183018 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.183749 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.183809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.183832 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.663216 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.663930 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.665967 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.666214 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:50 crc kubenswrapper[4922]: I0126 14:09:50.666345 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.034181 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:07:10.75295183 +0000 UTC Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.183832 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.183901 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.185508 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.185688 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.185810 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.185699 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.186021 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.186108 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.348611 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.349275 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.351019 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.351149 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.351171 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:51 crc kubenswrapper[4922]: I0126 14:09:51.353895 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:52 crc kubenswrapper[4922]: I0126 14:09:52.036375 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:19:43.931137637 +0000 UTC Jan 26 14:09:52 crc kubenswrapper[4922]: I0126 14:09:52.187493 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:52 crc kubenswrapper[4922]: I0126 14:09:52.187633 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:52 crc kubenswrapper[4922]: I0126 14:09:52.189009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:52 crc kubenswrapper[4922]: I0126 14:09:52.189105 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:52 crc kubenswrapper[4922]: I0126 14:09:52.189131 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:52 crc kubenswrapper[4922]: I0126 14:09:52.371475 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:53 crc kubenswrapper[4922]: I0126 14:09:53.036598 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:35:25.615016425 +0000 UTC Jan 26 14:09:53 crc kubenswrapper[4922]: I0126 14:09:53.191448 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:53 crc kubenswrapper[4922]: I0126 14:09:53.193046 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:53 crc kubenswrapper[4922]: I0126 14:09:53.193170 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:53 crc kubenswrapper[4922]: I0126 14:09:53.193198 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:53 crc kubenswrapper[4922]: E0126 14:09:53.204571 4922 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 14:09:54 crc kubenswrapper[4922]: I0126 14:09:54.037392 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:51:45.49336669 +0000 UTC Jan 26 14:09:54 crc kubenswrapper[4922]: I0126 14:09:54.197904 4922 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 26 14:09:54 crc kubenswrapper[4922]: I0126 14:09:54.199775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:54 crc kubenswrapper[4922]: I0126 14:09:54.199859 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:54 crc kubenswrapper[4922]: I0126 14:09:54.199881 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:54 crc kubenswrapper[4922]: I0126 14:09:54.204637 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:09:55 crc kubenswrapper[4922]: I0126 14:09:55.038355 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 11:59:51.314758489 +0000 UTC Jan 26 14:09:55 crc kubenswrapper[4922]: I0126 14:09:55.200848 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:55 crc kubenswrapper[4922]: I0126 14:09:55.202193 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:55 crc kubenswrapper[4922]: I0126 14:09:55.202598 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:55 crc kubenswrapper[4922]: I0126 14:09:55.202706 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:55 crc kubenswrapper[4922]: I0126 14:09:55.371968 4922 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 14:09:55 crc kubenswrapper[4922]: I0126 14:09:55.372518 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.026235 4922 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.038808 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:37:02.582154779 +0000 UTC Jan 26 14:09:56 crc kubenswrapper[4922]: E0126 14:09:56.042168 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 26 14:09:56 crc kubenswrapper[4922]: E0126 14:09:56.310347 4922 kubelet_node_status.go:99] "Unable to register node with API server" 
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.866768 4922 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.867251 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.874732 4922 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.874794 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.967679 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.967905 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.969116 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.969177 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:09:56 crc kubenswrapper[4922]: I0126 14:09:56.969193 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:09:57 crc kubenswrapper[4922]: I0126 14:09:57.039300 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:51:12.782334372 +0000 UTC Jan 26 14:09:58 crc kubenswrapper[4922]: I0126 14:09:58.039696 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:52:52.764414604 +0000 UTC Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.040457 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:28:40.93669238 +0000 UTC Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.361644 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.361965 4922 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach"
Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.364407 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.364454 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.364464 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.374905 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.511287 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.513270 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.513335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.513354 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:09:59 crc kubenswrapper[4922]: I0126 14:09:59.513392 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 14:09:59 crc kubenswrapper[4922]: E0126 14:09:59.518596 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 26 14:10:00 crc kubenswrapper[4922]: I0126 14:10:00.041267 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:36:41.184579036 +0000 UTC
Jan 26 14:10:00 crc kubenswrapper[4922]: I0126 14:10:00.215059 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 14:10:00 crc kubenswrapper[4922]: I0126 14:10:00.216748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:00 crc kubenswrapper[4922]: I0126 14:10:00.216808 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:00 crc kubenswrapper[4922]: I0126 14:10:00.216821 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.041856 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 09:16:08.425979739 +0000 UTC
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.852602 4922 trace.go:236] Trace[876958720]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 14:09:47.166) (total time: 14686ms):
Jan 26 14:10:01 crc kubenswrapper[4922]: Trace[876958720]: ---"Objects listed" error: 14686ms (14:10:01.852)
Jan 26 14:10:01 crc kubenswrapper[4922]: Trace[876958720]: [14.686414732s] [14.686414732s] END
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.852648 4922 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.852739 4922 trace.go:236] Trace[917397187]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 14:09:47.536) (total time: 14316ms):
Jan 26 14:10:01 crc kubenswrapper[4922]: Trace[917397187]: ---"Objects listed" error: 14316ms (14:10:01.852)
Jan 26 14:10:01 crc kubenswrapper[4922]: Trace[917397187]: [14.316619189s] [14.316619189s] END
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.852789 4922 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.853641 4922 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.863453 4922 trace.go:236] Trace[1393065826]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 14:09:47.126) (total time: 14736ms):
Jan 26 14:10:01 crc kubenswrapper[4922]: Trace[1393065826]: ---"Objects listed" error: 14736ms (14:10:01.863)
Jan 26 14:10:01 crc kubenswrapper[4922]: Trace[1393065826]: [14.736710308s] [14.736710308s] END
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.863491 4922 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.869772 4922 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 26 14:10:01 crc kubenswrapper[4922]: I0126 14:10:01.885198 4922 csr.go:261] certificate signing request csr-rhfxg is approved, waiting to be issued
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.029869 4922 apiserver.go:52] "Watching apiserver"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.042692 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:40:12.636700639 +0000 UTC
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.140609 4922 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.140927 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.141978 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.142225 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.142107 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.142147 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.142187 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.142230 4922 trace.go:236] Trace[736457128]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 14:09:46.914) (total time: 15227ms):
Jan 26 14:10:02 crc kubenswrapper[4922]: Trace[736457128]: ---"Objects listed" error: 15227ms (14:10:02.141)
Jan 26 14:10:02 crc kubenswrapper[4922]: Trace[736457128]: [15.227473027s] [15.227473027s] END
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.142324 4922 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.142032 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.143803 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.143835 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.144014 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.147305 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.147334 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.148518 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.148596 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.150433 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.151241 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.151253 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.151297 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.151295 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.152939 4922 csr.go:257] certificate signing request csr-rhfxg is issued
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.189756 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.212646 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.229570 4922 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41458->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.229658 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41458->192.168.126.11:17697: read: connection reset by peer"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.229570 4922 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55202->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.229734 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55202->192.168.126.11:17697: read: connection reset by peer"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.229955 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.230095 4922 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.230125 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.230314 4922 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.230345 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.236963 4922 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.243292 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255091 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255177 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255219 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255249 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255281 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255319 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255349 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255385 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255418 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255453 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255484 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255519 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255555 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255624 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255628 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255839 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255849 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255894 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255965 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256037 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256099 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255829 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256137 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.255837 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256108 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256216 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256253 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256338 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256370 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256406 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256481 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256514 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256587 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256628 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256663 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256694 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256798 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256836 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256123 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256340 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256479 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256941 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256605 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256694 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256721 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256741 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256995 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256797 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257010 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256859 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256965 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257050 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257211 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257338 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257522 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.256867 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257550 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257623 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257666 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257701 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257696 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257733 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257735 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257763 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257743 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257792 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257818 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257841 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257866 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257891 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257913 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257937 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257963 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.257988 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258013 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258017 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258039 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258020 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258079 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258091 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258128 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258162 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258195 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258221 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258247 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258275 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258304 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258332 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258356 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258383 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258408 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258438 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258463 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258487 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258529 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258555 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258580 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258606 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258631 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258657 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258684 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258751 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258817 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258849 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258872 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258894 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258947 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258971 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258994 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259018 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259046 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259589 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259624 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259647 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259673 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID:
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259698 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259722 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259747 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259772 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259801 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259828 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259869 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259893 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259919 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259946 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod 
\"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.259975 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260002 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260029 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260056 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260102 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260180 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260208 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260236 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260266 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260295 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260327 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260355 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260382 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260409 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260435 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260461 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260488 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260518 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260543 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260569 4922 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260594 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260620 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260669 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260694 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260727 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260755 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260784 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260810 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260837 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260860 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260884 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260908 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260931 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260957 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.260987 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261014 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261043 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261089 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261114 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261141 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261167 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261191 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258053 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.258346 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261003 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261024 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261214 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261217 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261316 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261317 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261330 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261357 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261341 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261474 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261513 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261541 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261569 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261600 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261619 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261626 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261645 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261709 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261751 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261790 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261818 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261919 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261951 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261982 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262012 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262042 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262088 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262119 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262147 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262174 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262205 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262238 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262268 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262295 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262321 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262350 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262376 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262401 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262426 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262454 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262481 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262512 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262538 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262571 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262596 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262620 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262645 4922 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262669 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262695 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262771 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262794 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262820 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262843 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262869 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262899 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262928 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262954 4922 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262980 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263007 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263035 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263090 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263125 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263157 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263188 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263217 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263276 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263319 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263351 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263384 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263412 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263437 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263468 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263510 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263542 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263577 4922 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263606 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263632 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263658 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263689 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263802 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263821 4922 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263837 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263851 4922 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263865 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263881 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263898 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263913 4922 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263928 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263943 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263959 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263975 4922 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263990 4922 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.264006 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.264021 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.264034 4922 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.264048 4922 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.264061 4922 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265279 4922 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265298 4922 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265312 4922 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265328 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265378 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265394 4922 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265408 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265423 4922 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265437 4922 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265451 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265464 4922 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265478 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265632 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265647 4922 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265664 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265679 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265693 4922 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265709 4922 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265723 4922 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261712 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.261767 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262109 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262282 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262504 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262609 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.269546 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.262921 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263094 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263475 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263618 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263864 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263685 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.263904 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.264228 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.264924 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.265051 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.265844 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.266372 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.266548 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.266953 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.267361 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.267428 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.267841 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.267927 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.267922 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.268038 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.268265 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.268311 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.268461 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.268473 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.268930 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.269256 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.269671 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.270301 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:02.770153756 +0000 UTC m=+19.972416638 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.270421 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.271483 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.271444 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.271628 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.271662 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.272895 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.272953 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.273090 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.273852 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.273779 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.274106 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.274248 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.274569 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.274594 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.275035 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.275409 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.275563 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.275748 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.275799 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.276231 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.276321 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.276354 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.276612 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.277044 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.277599 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:10:02.777570964 +0000 UTC m=+19.979833736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.277828 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.278246 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.278586 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.279760 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.279764 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.280151 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.280164 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.280310 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.280471 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.280489 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.281795 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.281893 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.282150 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.282240 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.282290 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.282354 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.282500 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:02.782349564 +0000 UTC m=+19.984612336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.282603 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.282672 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.282751 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.282915 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.283146 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.283392 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.283618 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.283676 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.283687 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.283753 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.283941 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.284106 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.284154 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.286544 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.286908 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.287304 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.288390 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.287997 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.288730 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.288905 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.288923 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.289196 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.289201 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.289313 4922 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.289358 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.289595 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.289637 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.289803 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.289835 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.289951 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.290247 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.290311 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.290522 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.290878 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.291127 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.290152 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.298996 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.299593 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.300316 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.300904 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.301033 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.301220 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.301266 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.301365 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.301651 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.301802 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.303742 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.303763 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.303800 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.303829 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.303884 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.303920 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:02.803888925 +0000 UTC m=+20.006151697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.304650 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.304941 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.305196 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.305550 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.307285 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.308015 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.308181 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.308302 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.308808 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.310136 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.310413 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.310542 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.311023 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.312540 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.314514 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.319343 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.319537 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.319562 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.319591 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.319683 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:02.819642678 +0000 UTC m=+20.021905450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.321265 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.321387 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.321596 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.322963 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.323286 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.324931 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.325906 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.326015 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.327039 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.327253 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.327497 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.327643 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.328302 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.331231 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.333413 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.338263 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.338367 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tr7ks"] Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.338628 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.338830 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.338890 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tr7ks" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.341349 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.343380 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.343839 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.344569 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.348992 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.349324 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.348435 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.353630 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.354168 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.355162 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.362828 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367171 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367236 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367315 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367332 4922 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367342 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367352 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367366 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367374 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367399 4922 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367409 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367419 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367427 4922 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367435 4922 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367444 4922 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367451 4922 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367475 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367483 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367492 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367502 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367512 4922 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367520 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367529 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367554 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367565 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367575 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367624 4922 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367634 4922 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367643 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367654 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367665 4922 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367675 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367683 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367707 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367716 4922 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367724 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367733 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367742 4922 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367751 4922 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367761 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367785 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367794 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367802 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367812 4922 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367821 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367829 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367838 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367864 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367872 4922 reconciler_common.go:293] "Volume detached for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367880 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367890 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367898 4922 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367905 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367913 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367936 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367944 4922 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367953 4922 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367961 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367970 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367978 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367987 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.367997 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368017 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368026 4922 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368034 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368042 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368049 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368058 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368113 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368121 4922 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368130 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368137 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368145 4922 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368155 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368163 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" 
Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368187 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368195 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368205 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368214 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368224 4922 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368233 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368256 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368264 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368274 4922 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368283 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368292 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368300 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368309 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368317 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368342 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368351 4922 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368359 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368378 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368388 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368412 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368421 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368429 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368438 4922 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368445 4922 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368453 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368461 4922 reconciler_common.go:293] "Volume 
detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368470 4922 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368508 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368517 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368524 4922 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368533 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368542 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368549 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368574 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368582 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368590 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368599 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368608 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368617 
4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368625 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368649 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368657 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368670 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368677 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368685 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368693 4922 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368702 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368725 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368733 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368740 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368748 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368755 4922 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368763 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368771 4922 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368779 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368802 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368810 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368818 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368826 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368835 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368842 4922 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368850 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368861 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368884 4922 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368892 4922 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368900 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368911 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368920 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368929 4922 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368937 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368960 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368968 4922 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368976 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368984 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.368997 4922 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.369006 4922 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.369014 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.369035 4922 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.369044 4922 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.370636 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.372315 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.373148 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.388594 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.406002 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.407817 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.413038 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.414089 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.415321 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.415326 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.417527 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.438772 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.447000 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.452699 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.455981 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.458021 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.467312 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.471339 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbrpx\" (UniqueName: \"kubernetes.io/projected/8907acd9-6134-47b2-b97c-dd03dea18383-kube-api-access-xbrpx\") pod \"node-resolver-tr7ks\" (UID: \"8907acd9-6134-47b2-b97c-dd03dea18383\") " pod="openshift-dns/node-resolver-tr7ks" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.471542 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8907acd9-6134-47b2-b97c-dd03dea18383-hosts-file\") pod \"node-resolver-tr7ks\" (UID: \"8907acd9-6134-47b2-b97c-dd03dea18383\") " pod="openshift-dns/node-resolver-tr7ks" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.471665 4922 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.471810 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.471889 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.471958 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.472108 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.472198 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.473494 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-2726b66bf308a073c89e9498c527c515ed8c2c57fdaa9aa10991937f7fe02b05 WatchSource:0}: Error finding container 2726b66bf308a073c89e9498c527c515ed8c2c57fdaa9aa10991937f7fe02b05: Status 404 returned error can't find the container with id 2726b66bf308a073c89e9498c527c515ed8c2c57fdaa9aa10991937f7fe02b05 Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.473667 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.473684 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.490691 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.503013 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.515429 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.530152 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.542171 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.553837 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b16283
5189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.573481 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbrpx\" (UniqueName: \"kubernetes.io/projected/8907acd9-6134-47b2-b97c-dd03dea18383-kube-api-access-xbrpx\") pod \"node-resolver-tr7ks\" (UID: \"8907acd9-6134-47b2-b97c-dd03dea18383\") " pod="openshift-dns/node-resolver-tr7ks" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.573553 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8907acd9-6134-47b2-b97c-dd03dea18383-hosts-file\") pod \"node-resolver-tr7ks\" (UID: \"8907acd9-6134-47b2-b97c-dd03dea18383\") " pod="openshift-dns/node-resolver-tr7ks" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.573634 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8907acd9-6134-47b2-b97c-dd03dea18383-hosts-file\") pod \"node-resolver-tr7ks\" (UID: \"8907acd9-6134-47b2-b97c-dd03dea18383\") " pod="openshift-dns/node-resolver-tr7ks" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.576850 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.594641 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.595520 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbrpx\" (UniqueName: \"kubernetes.io/projected/8907acd9-6134-47b2-b97c-dd03dea18383-kube-api-access-xbrpx\") pod \"node-resolver-tr7ks\" (UID: \"8907acd9-6134-47b2-b97c-dd03dea18383\") " pod="openshift-dns/node-resolver-tr7ks" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.610227 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.676148 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tr7ks" Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.690337 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8907acd9_6134_47b2_b97c_dd03dea18383.slice/crio-618b55f99bcfce4c9ef781db896b14026d5238d4210e68c038983f3192c7169b WatchSource:0}: Error finding container 618b55f99bcfce4c9ef781db896b14026d5238d4210e68c038983f3192c7169b: Status 404 returned error can't find the container with id 618b55f99bcfce4c9ef781db896b14026d5238d4210e68c038983f3192c7169b Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.775135 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.775379 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.775459 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:03.775434323 +0000 UTC m=+20.977697095 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.876525 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.876625 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.876655 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.876682 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.876807 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:10:03.876777934 +0000 UTC m=+21.079040706 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.876845 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.876869 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.876881 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.876918 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.876963 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:03.876918378 +0000 UTC m=+21.079181310 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.876992 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:03.87698033 +0000 UTC m=+21.079243302 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.877036 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.877047 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.877056 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:02 crc kubenswrapper[4922]: E0126 14:10:02.877101 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:03.877094073 +0000 UTC m=+21.079356845 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:02 crc kubenswrapper[4922]: I0126 14:10:02.931334 4922 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931590 4922 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931612 4922 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931610 4922 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931628 4922 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931590 4922 reflector.go:484] 
object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931651 4922 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931671 4922 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931671 4922 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931733 4922 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931732 4922 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931794 4922 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931824 4922 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:02 crc kubenswrapper[4922]: W0126 14:10:02.931838 4922 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.042946 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:47:56.302049823 +0000 UTC Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.095970 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.096514 4922 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.097422 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.098117 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.098762 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.099322 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.099917 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.100557 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.101188 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.101728 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.104186 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.104829 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.105757 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.106373 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.106996 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.109188 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.109950 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.110866 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.111762 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.112550 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.115737 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.116696 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.117648 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.118520 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.119213 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.119816 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.120414 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.121398 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.121877 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.123874 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.124364 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.124814 4922 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.125405 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.126989 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.127504 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.128354 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.129890 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.130544 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.131520 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.132250 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.137468 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.137981 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.140175 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.140806 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.141868 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.142367 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.143358 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.143978 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.145108 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.145386 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.145581 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.148386 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.148843 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.149435 4922 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.150430 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.150899 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.151840 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9zx7f"] Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.152221 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.154168 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 14:05:02 +0000 UTC, rotation deadline is 2026-10-17 09:12:42.368002806 +0000 UTC Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.154239 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6331h2m39.213765716s for next certificate rotation Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.158992 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.159218 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.159684 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.159717 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.169679 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.181304 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.207272 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.223748 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.224480 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.226901 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f" exitCode=255 Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.226991 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f"} Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.228291 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tr7ks" event={"ID":"8907acd9-6134-47b2-b97c-dd03dea18383","Type":"ContainerStarted","Data":"086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287"} Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.228332 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tr7ks" event={"ID":"8907acd9-6134-47b2-b97c-dd03dea18383","Type":"ContainerStarted","Data":"618b55f99bcfce4c9ef781db896b14026d5238d4210e68c038983f3192c7169b"} Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.229942 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5"} Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.229968 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07"} Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.230006 4922 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c7a11a60af48194a97fdf938befb767fa47d4db82b54023dd3b23254061eda88"} Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.231725 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71"} Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.231755 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2726b66bf308a073c89e9498c527c515ed8c2c57fdaa9aa10991937f7fe02b05"} Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.233133 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8fa2c1558a1c1261c59d517ee844fde6552442a1a025cb173c5345aae257986e"} Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.241372 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.253364 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.267380 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.278044 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282159 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282347 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-multus-cni-dir\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282584 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-hostroot\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282618 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/103e8f62-57c7-4d49-b740-16d357710e61-multus-daemon-config\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282646 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-etc-kubernetes\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282668 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-run-netns\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282717 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-cnibin\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282748 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/103e8f62-57c7-4d49-b740-16d357710e61-cni-binary-copy\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282772 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppvjp\" (UniqueName: \"kubernetes.io/projected/103e8f62-57c7-4d49-b740-16d357710e61-kube-api-access-ppvjp\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282864 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-var-lib-kubelet\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282895 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-run-k8s-cni-cncf-io\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282949 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-system-cni-dir\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.282982 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-multus-conf-dir\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.283005 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-run-multus-certs\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.283046 4922 scope.go:117] "RemoveContainer" containerID="3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.283058 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-var-lib-cni-multus\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.283794 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-multus-socket-dir-parent\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.283847 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-var-lib-cni-bin\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.283927 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-os-release\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc 
kubenswrapper[4922]: I0126 14:10:03.296463 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.311353 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.325481 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.343868 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.358412 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.379948 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384450 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-etc-kubernetes\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384506 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-multus-cni-dir\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384528 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-hostroot\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384547 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/103e8f62-57c7-4d49-b740-16d357710e61-multus-daemon-config\") pod \"multus-9zx7f\" (UID: 
\"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384563 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/103e8f62-57c7-4d49-b740-16d357710e61-cni-binary-copy\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384579 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-run-netns\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384599 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-cnibin\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384618 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppvjp\" (UniqueName: \"kubernetes.io/projected/103e8f62-57c7-4d49-b740-16d357710e61-kube-api-access-ppvjp\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384635 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-var-lib-kubelet\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384652 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-hostroot\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384659 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-system-cni-dir\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384724 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-run-k8s-cni-cncf-io\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384745 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-var-lib-kubelet\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384760 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-multus-conf-dir\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384784 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-multus-conf-dir\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384791 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-run-multus-certs\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384810 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-run-k8s-cni-cncf-io\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384814 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-var-lib-cni-multus\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384832 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-var-lib-cni-multus\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384861 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-cnibin\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384912 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-multus-socket-dir-parent\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384863 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-run-multus-certs\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384776 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-run-netns\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 
14:10:03.384725 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-system-cni-dir\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384867 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-multus-socket-dir-parent\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384964 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-multus-cni-dir\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.385017 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-var-lib-cni-bin\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.384994 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-host-var-lib-cni-bin\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.385207 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-os-release\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.385273 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-etc-kubernetes\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.385506 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/103e8f62-57c7-4d49-b740-16d357710e61-os-release\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.385635 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/103e8f62-57c7-4d49-b740-16d357710e61-cni-binary-copy\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.385652 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/103e8f62-57c7-4d49-b740-16d357710e61-multus-daemon-config\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " 
pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.395398 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.403125 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppvjp\" (UniqueName: 
\"kubernetes.io/projected/103e8f62-57c7-4d49-b740-16d357710e61-kube-api-access-ppvjp\") pod \"multus-9zx7f\" (UID: \"103e8f62-57c7-4d49-b740-16d357710e61\") " pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.410253 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.467359 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9zx7f" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.549558 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-g5x8j"] Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.550053 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-52ctw"] Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.550211 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.550783 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.554218 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.554440 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5m7p9"] Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.554609 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.554897 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.555044 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.555247 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.555345 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.555436 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.556774 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.559101 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.559293 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.559411 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.559484 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.559419 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.559558 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.559693 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.591462 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.613244 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26
T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.630901 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.642479 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.655854 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.672500 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.687572 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.687854 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.687887 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-kubelet\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.687936 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-openvswitch\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688017 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxf58\" (UniqueName: \"kubernetes.io/projected/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-kube-api-access-qxf58\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688057 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-systemd-units\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688267 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-node-log\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688336 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-log-socket\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688370 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-bin\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688397 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-env-overrides\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688427 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-etc-openvswitch\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688460 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-script-lib\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688494 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-cni-binary-copy\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688522 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4dd\" (UniqueName: \"kubernetes.io/projected/d729a48f-6c8a-41a2-82f0-336269ebbfc7-kube-api-access-xk4dd\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688568 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovn-node-metrics-cert\") pod \"ovnkube-node-5m7p9\" (UID: 
\"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688632 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-var-lib-openvswitch\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688671 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-slash\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688698 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-ovn-kubernetes\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688720 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-netd\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688745 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-ovn\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688766 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-config\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688789 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-system-cni-dir\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688823 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-cnibin\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688851 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-systemd\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688872 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7cd\" (UniqueName: \"kubernetes.io/projected/ec4defeb-f2b0-4291-9147-b37e5c43da57-kube-api-access-9m7cd\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688896 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d729a48f-6c8a-41a2-82f0-336269ebbfc7-proxy-tls\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688920 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-netns\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688956 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-os-release\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.688981 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.689031 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d729a48f-6c8a-41a2-82f0-336269ebbfc7-rootfs\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.689055 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d729a48f-6c8a-41a2-82f0-336269ebbfc7-mcd-auth-proxy-config\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.690817 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.719937 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.734843 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.749932 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.764388 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.786980 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790534 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-os-release\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790606 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790656 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d729a48f-6c8a-41a2-82f0-336269ebbfc7-mcd-auth-proxy-config\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d729a48f-6c8a-41a2-82f0-336269ebbfc7-rootfs\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790737 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790750 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-os-release\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790839 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790762 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790905 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d729a48f-6c8a-41a2-82f0-336269ebbfc7-rootfs\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.790973 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-kubelet\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791026 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-openvswitch\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791040 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791111 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-openvswitch\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 
14:10:03.791087 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxf58\" (UniqueName: \"kubernetes.io/projected/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-kube-api-access-qxf58\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791219 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-systemd-units\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791268 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-node-log\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791275 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-systemd-units\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791292 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-log-socket\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791311 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-bin\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791320 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-node-log\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791330 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-env-overrides\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791351 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-etc-openvswitch\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791358 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-log-socket\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791374 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-script-lib\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791362 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-bin\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791399 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-cni-binary-copy\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791417 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-etc-openvswitch\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791418 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk4dd\" (UniqueName: \"kubernetes.io/projected/d729a48f-6c8a-41a2-82f0-336269ebbfc7-kube-api-access-xk4dd\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791464 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovn-node-metrics-cert\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791502 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-var-lib-openvswitch\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791592 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-slash\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791620 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-ovn-kubernetes\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791639 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-netd\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791655 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-system-cni-dir\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791673 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-cnibin\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791691 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-ovn\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791708 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-config\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791730 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d729a48f-6c8a-41a2-82f0-336269ebbfc7-proxy-tls\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791747 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-systemd\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791768 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7cd\" (UniqueName: \"kubernetes.io/projected/ec4defeb-f2b0-4291-9147-b37e5c43da57-kube-api-access-9m7cd\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791793 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791816 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-netns\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791856 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d729a48f-6c8a-41a2-82f0-336269ebbfc7-mcd-auth-proxy-config\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791879 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-netns\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791905 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-ovn\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791911 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-cnibin\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791951 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-ovn-kubernetes\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791984 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-netd\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.791993 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-slash\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.792018 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-system-cni-dir\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.792156 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-var-lib-openvswitch\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.792204 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.792207 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.792281 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:05.792258098 +0000 UTC m=+22.994521060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.792481 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-cni-binary-copy\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.792529 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-systemd\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.792698 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-kubelet\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.793308 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-env-overrides\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 
26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.793691 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-script-lib\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.793789 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-config\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.797648 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d729a48f-6c8a-41a2-82f0-336269ebbfc7-proxy-tls\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.798498 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovn-node-metrics-cert\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.800189 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.817596 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk4dd\" (UniqueName: \"kubernetes.io/projected/d729a48f-6c8a-41a2-82f0-336269ebbfc7-kube-api-access-xk4dd\") pod \"machine-config-daemon-g5x8j\" (UID: \"d729a48f-6c8a-41a2-82f0-336269ebbfc7\") " pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.817966 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7cd\" (UniqueName: \"kubernetes.io/projected/ec4defeb-f2b0-4291-9147-b37e5c43da57-kube-api-access-9m7cd\") pod \"ovnkube-node-5m7p9\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.827230 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.831056 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.864396 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.866890 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.875384 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.888686 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.892244 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxf58\" (UniqueName: \"kubernetes.io/projected/a1c927f4-1d72-49fa-b6fd-9390de6d00d0-kube-api-access-qxf58\") pod \"multus-additional-cni-plugins-52ctw\" (UID: \"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\") " pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.892601 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.892693 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.892788 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.892836 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:10:05.892800926 +0000 UTC m=+23.095063698 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.892875 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.892900 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.892926 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:05.892909829 +0000 UTC m=+23.095172601 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.892964 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.893006 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.893021 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.893079 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.893118 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:05.893091434 +0000 UTC m=+23.095354406 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.893125 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.893138 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:03 crc kubenswrapper[4922]: E0126 14:10:03.893170 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:05.893160247 +0000 UTC m=+23.095423019 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.894767 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: W0126 14:10:03.923206 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec4defeb_f2b0_4291_9147_b37e5c43da57.slice/crio-8198241e659e47b41d2d9176758d220eba65936f01649b5928d4dd521e7dae37 WatchSource:0}: Error finding container 8198241e659e47b41d2d9176758d220eba65936f01649b5928d4dd521e7dae37: Status 404 returned error can't find the container with id 
8198241e659e47b41d2d9176758d220eba65936f01649b5928d4dd521e7dae37 Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.945095 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:03 crc kubenswrapper[4922]: I0126 14:10:03.984624 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.005720 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.044236 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:14:05.692310696 +0000 UTC Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.053480 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.075565 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.091355 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:04 crc kubenswrapper[4922]: E0126 14:10:04.091491 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.091857 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:04 crc kubenswrapper[4922]: E0126 14:10:04.091921 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.091969 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:04 crc kubenswrapper[4922]: E0126 14:10:04.092009 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.102702 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.119976 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.133945 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.144111 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.159449 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.180975 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.181215 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-52ctw" Jan 26 14:10:04 crc kubenswrapper[4922]: W0126 14:10:04.194482 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1c927f4_1d72_49fa_b6fd_9390de6d00d0.slice/crio-409de81de9bb093b1119284c76c9043d909df19deed6ed38dd071ea463a4c74a WatchSource:0}: Error finding container 409de81de9bb093b1119284c76c9043d909df19deed6ed38dd071ea463a4c74a: Status 404 returned error can't find the container with id 409de81de9bb093b1119284c76c9043d909df19deed6ed38dd071ea463a4c74a Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.208874 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.218615 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.237267 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0" exitCode=0 Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.237360 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0"} Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.237418 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"8198241e659e47b41d2d9176758d220eba65936f01649b5928d4dd521e7dae37"} Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.240274 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd"} Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.240318 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" 
event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117"} Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.240331 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"33464eac06020e29f11f4da1642366753287449f06c49d230cdee46d2cb91a58"} Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.244573 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zx7f" event={"ID":"103e8f62-57c7-4d49-b740-16d357710e61","Type":"ContainerStarted","Data":"92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197"} Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.244606 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zx7f" event={"ID":"103e8f62-57c7-4d49-b740-16d357710e61","Type":"ContainerStarted","Data":"095c379b79a4a530023354b08025b41d0d30e466f6b3f67ab81b9b3442f6d052"} Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.246158 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" event={"ID":"a1c927f4-1d72-49fa-b6fd-9390de6d00d0","Type":"ContainerStarted","Data":"409de81de9bb093b1119284c76c9043d909df19deed6ed38dd071ea463a4c74a"} Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.248794 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.250051 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd"} Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.250633 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.256540 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.269685 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.283425 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.299617 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.311608 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.321015 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.337362 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.352241 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.370593 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.447650 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.463101 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.463932 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.480724 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.494947 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.503785 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.503923 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.508588 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.521634 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.523978 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.540011 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.556722 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.569764 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.583972 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.595107 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.610822 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.623718 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.656346 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.700637 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.740143 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.774699 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:04 crc kubenswrapper[4922]: I0126 14:10:04.818672 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.044811 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:32:39.010133791 +0000 UTC Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.187419 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8w5kn"] Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.188947 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.191275 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.191446 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.191469 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.191648 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.202473 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.214351 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.229330 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.241526 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.256726 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4"} Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.256793 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e"} Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.256807 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8"} Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.258677 4922 generic.go:334] "Generic (PLEG): container finished" podID="a1c927f4-1d72-49fa-b6fd-9390de6d00d0" containerID="b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a" exitCode=0 Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.258692 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.258986 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" event={"ID":"a1c927f4-1d72-49fa-b6fd-9390de6d00d0","Type":"ContainerDied","Data":"b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a"} Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.283046 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z 
is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.301227 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.308533 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a511a19d-84dc-4136-84e9-2060471c1fa0-serviceca\") pod \"node-ca-8w5kn\" (UID: \"a511a19d-84dc-4136-84e9-2060471c1fa0\") " pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.308609 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92xd\" (UniqueName: \"kubernetes.io/projected/a511a19d-84dc-4136-84e9-2060471c1fa0-kube-api-access-m92xd\") pod \"node-ca-8w5kn\" (UID: \"a511a19d-84dc-4136-84e9-2060471c1fa0\") " pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.308768 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a511a19d-84dc-4136-84e9-2060471c1fa0-host\") pod \"node-ca-8w5kn\" (UID: \"a511a19d-84dc-4136-84e9-2060471c1fa0\") " pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc 
kubenswrapper[4922]: I0126 14:10:05.319963 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.335986 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.352103 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.363959 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.376669 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.409212 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a511a19d-84dc-4136-84e9-2060471c1fa0-host\") pod \"node-ca-8w5kn\" (UID: \"a511a19d-84dc-4136-84e9-2060471c1fa0\") " pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.409260 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a511a19d-84dc-4136-84e9-2060471c1fa0-serviceca\") pod \"node-ca-8w5kn\" (UID: \"a511a19d-84dc-4136-84e9-2060471c1fa0\") " pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.409293 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m92xd\" (UniqueName: \"kubernetes.io/projected/a511a19d-84dc-4136-84e9-2060471c1fa0-kube-api-access-m92xd\") pod \"node-ca-8w5kn\" (UID: \"a511a19d-84dc-4136-84e9-2060471c1fa0\") " pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.409523 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a511a19d-84dc-4136-84e9-2060471c1fa0-host\") pod \"node-ca-8w5kn\" (UID: \"a511a19d-84dc-4136-84e9-2060471c1fa0\") " pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.410630 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a511a19d-84dc-4136-84e9-2060471c1fa0-serviceca\") pod \"node-ca-8w5kn\" (UID: \"a511a19d-84dc-4136-84e9-2060471c1fa0\") " pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.415308 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.445202 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m92xd\" (UniqueName: \"kubernetes.io/projected/a511a19d-84dc-4136-84e9-2060471c1fa0-kube-api-access-m92xd\") pod \"node-ca-8w5kn\" (UID: \"a511a19d-84dc-4136-84e9-2060471c1fa0\") " pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.477293 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.515686 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.521202 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8w5kn" Jan 26 14:10:05 crc kubenswrapper[4922]: W0126 14:10:05.543313 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda511a19d_84dc_4136_84e9_2060471c1fa0.slice/crio-2cd9387ac57375f3240affed9d8aec870a1962efa0860c2cbd5a2d2bb8876dbc WatchSource:0}: Error finding container 2cd9387ac57375f3240affed9d8aec870a1962efa0860c2cbd5a2d2bb8876dbc: Status 404 returned error can't find the container with id 2cd9387ac57375f3240affed9d8aec870a1962efa0860c2cbd5a2d2bb8876dbc Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.555673 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\
\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.596592 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.634166 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.675361 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.716843 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.760090 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.796122 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.813375 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.813592 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.813733 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:09.813679251 +0000 UTC m=+27.015942023 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.835545 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc 
kubenswrapper[4922]: I0126 14:10:05.884130 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.913824 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.914006 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.914209 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914312 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:10:09.914284411 +0000 UTC m=+27.116547203 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914365 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914392 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914407 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.914443 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914473 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:09.914448756 +0000 UTC m=+27.116711718 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.914524 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914612 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914687 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914713 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914737 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914689 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:09.914676492 +0000 UTC m=+27.116939264 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:05 crc kubenswrapper[4922]: E0126 14:10:05.914790 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:09.914779705 +0000 UTC m=+27.117042487 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.918782 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.921221 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.921264 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.921276 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.921425 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.982211 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.987141 4922 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.987548 4922 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.989220 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.989301 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.989324 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.989358 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:05 crc kubenswrapper[4922]: I0126 14:10:05.989383 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:05Z","lastTransitionTime":"2026-01-26T14:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: E0126 14:10:06.004740 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.008967 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.009003 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.009020 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.009040 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.009053 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: E0126 14:10:06.025643 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.030518 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.030593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.030620 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.030655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.030682 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.042933 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.045430 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:30:24.638873153 +0000 UTC Jan 26 14:10:06 crc kubenswrapper[4922]: E0126 14:10:06.053870 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…payload identical to the 14:10:06.004740 retry, elided…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.058793 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.058864 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.058884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.058914 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.058935 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: E0126 14:10:06.075280 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z"
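Every failed status update in this stretch of the journal shares one root cause: the apiserver cannot call the network-node-identity webhook on 127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-26. A minimal Go diagnostic sketch (not part of the log) that reads the certificate's validity window off that endpoint from the node; InsecureSkipVerify is deliberate so the expired certificate can be inspected rather than trusted:

```go
// Diagnostic sketch: fetch the webhook's serving certificate and print its
// validity window. The endpoint 127.0.0.1:9743 is taken from the failed
// Post calls in the log; run this on the node itself.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want to read the expired cert, not trust it
	})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:    %s\n", cert.Subject)
	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:    %v\n", time.Now().After(cert.NotAfter))
}
```

Given the crc hostname and the 17-month gap between NotAfter and the node clock, a plausible reading is a VM resumed long after its rotated certificates lapsed, so every admission-gated write fails until the certificates are renewed.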
2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.077805 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.079367 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.079409 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.079423 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.079443 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.079458 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.091757 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.091781 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:06 crc kubenswrapper[4922]: E0126 14:10:06.091883 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.091966 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:06 crc kubenswrapper[4922]: E0126 14:10:06.092134 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:06 crc kubenswrapper[4922]: E0126 14:10:06.092354 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:06 crc kubenswrapper[4922]: E0126 14:10:06.095796 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: E0126 14:10:06.095904 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.097573 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.097602 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.097616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.097631 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.097642 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.200950 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.201019 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.201035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.201080 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.201101 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.270173 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.270237 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.270256 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.274422 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.276674 4922 generic.go:334] "Generic (PLEG): container finished" podID="a1c927f4-1d72-49fa-b6fd-9390de6d00d0" containerID="b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781" exitCode=0 Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.276746 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" event={"ID":"a1c927f4-1d72-49fa-b6fd-9390de6d00d0","Type":"ContainerDied","Data":"b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.278882 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8w5kn" event={"ID":"a511a19d-84dc-4136-84e9-2060471c1fa0","Type":"ContainerStarted","Data":"849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.285003 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8w5kn" event={"ID":"a511a19d-84dc-4136-84e9-2060471c1fa0","Type":"ContainerStarted","Data":"2cd9387ac57375f3240affed9d8aec870a1962efa0860c2cbd5a2d2bb8876dbc"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.293849 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.305927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.305972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.305983 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.306002 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.306015 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
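The pod status payloads that follow are strategic merge patches against each pod's status subresource; the $setElementOrder/conditions key is a merge-patch directive that pins the order of the conditions list, which the apiserver merges by its type key. A client-go sketch of the equivalent call shape (the kubelet's status manager does this internally; the kubeconfig path is hypothetical, while the pod and namespace are taken from the log record below):

```go
// Sketch of the call shape behind the "failed to patch status" records:
// a strategic merge patch targeting the "status" subresource.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// A minimal example payload; the real ones in the log carry full
	// conditions and containerStatuses lists plus $setElementOrder.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"ContainersNotReady"}]}}`)
	_, err = cs.CoreV1().Pods("openshift-network-console").Patch(
		context.TODO(), "networking-console-plugin-85b44fc459-gdk6g",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		// With the expired webhook cert, this fails exactly like the kubelet's
		// own attempts: the apiserver cannot call pod.network-node-identity.
		log.Fatalf("patch status: %v", err)
	}
}
```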
Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.314999 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.331016 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.343396 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.356543 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.370675 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\
\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.387943 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.399210 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.409185 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.409235 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.409248 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.409267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.409281 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.432089 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.472845 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.511869 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.511917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.511926 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.511946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.511958 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.515870 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.555350 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.595562 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.614306 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.614391 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.614401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.614417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.614429 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.645354 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z 
is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.674025 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.717301 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.717361 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.717372 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.717391 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.717404 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.735984 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.760122 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488
c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" 
not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.795524 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.819938 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.819986 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.819996 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.820015 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.820026 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.844503 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.889107 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z 
is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.907114 4922 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.923507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.923592 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.923613 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.923644 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.923669 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:06Z","lastTransitionTime":"2026-01-26T14:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.937160 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.975548 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:06 crc kubenswrapper[4922]: I0126 14:10:06.998234 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.013653 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.015209 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.026089 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.026123 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.026135 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.026152 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.026164 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.039861 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.046200 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:35:56.890317753 +0000 UTC Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.075327 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.116652 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.129558 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.129616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.129629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.129653 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.129670 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.161205 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.201107 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.232256 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.232338 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.232358 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.232388 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.232411 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.239955 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.278552 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.285541 4922 generic.go:334] "Generic (PLEG): container finished" podID="a1c927f4-1d72-49fa-b6fd-9390de6d00d0" containerID="a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425" exitCode=0 Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.285715 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" event={"ID":"a1c927f4-1d72-49fa-b6fd-9390de6d00d0","Type":"ContainerDied","Data":"a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.318985 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.334633 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.334670 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.334681 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.334698 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.334710 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.355259 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.395891 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.438100 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.438169 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.438187 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.438214 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.438235 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.442653 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.475558 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.516433 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.540531 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.540578 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.540590 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.540609 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.540620 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.560051 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.593820 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.633302 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.643165 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.643203 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.643213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.643228 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.643241 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.686414 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.721342 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.746375 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.746428 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.746440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.746459 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.746474 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.766565 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.803027 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z 
is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.843319 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.849822 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.849872 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.849882 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.849901 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.849912 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.878270 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.917031 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.953382 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.953475 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.953502 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.953539 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.953563 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:07Z","lastTransitionTime":"2026-01-26T14:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.956327 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:07 crc kubenswrapper[4922]: I0126 14:10:07.997316 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.039241 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.046617 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:00:14.644030157 +0000 UTC Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.056364 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.056410 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.056423 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.056444 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.056460 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.078649 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.092522 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:08 crc kubenswrapper[4922]: E0126 14:10:08.092869 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.092530 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.092522 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:08 crc kubenswrapper[4922]: E0126 14:10:08.092984 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:08 crc kubenswrapper[4922]: E0126 14:10:08.093253 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.118034 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.159931 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.159982 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.159991 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.160009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.160021 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.169757 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c9997007
15c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.212615 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845
a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.239876 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.263419 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.263464 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.263475 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.263495 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.263513 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.280113 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.293723 4922 generic.go:334] "Generic (PLEG): container finished" podID="a1c927f4-1d72-49fa-b6fd-9390de6d00d0" containerID="dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd" exitCode=0 Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.293779 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" event={"ID":"a1c927f4-1d72-49fa-b6fd-9390de6d00d0","Type":"ContainerDied","Data":"dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.300423 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.314171 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.357013 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.365465 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.365513 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.365530 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.365576 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.365594 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.399655 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.436010 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.469507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.469563 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.469574 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.469600 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.469612 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.477449 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.515665 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"
mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.560890 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.572540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.572587 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.572599 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.572615 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.572628 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.594919 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.633018 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.674037 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.675891 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.675935 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.675946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.675966 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.675978 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.714737 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.755222 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.779527 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.779583 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.779594 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.779635 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.779650 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.795858 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:
10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.845467 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.872814 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.882538 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.882580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.882593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.882611 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.882624 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.913291 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.959215 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf74306
7f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.985001 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.985356 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.985430 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.985685 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.985751 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:08Z","lastTransitionTime":"2026-01-26T14:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:08 crc kubenswrapper[4922]: I0126 14:10:08.994408 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.037879 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.046886 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:56:58.611287023 +0000 UTC Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.088914 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.089360 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.089500 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.089809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.089925 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:09Z","lastTransitionTime":"2026-01-26T14:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.192668 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.192711 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.192720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.192738 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.192750 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:09Z","lastTransitionTime":"2026-01-26T14:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.295229 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.295278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.295293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.295310 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.295320 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:09Z","lastTransitionTime":"2026-01-26T14:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.306531 4922 generic.go:334] "Generic (PLEG): container finished" podID="a1c927f4-1d72-49fa-b6fd-9390de6d00d0" containerID="336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249" exitCode=0 Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.306580 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" event={"ID":"a1c927f4-1d72-49fa-b6fd-9390de6d00d0","Type":"ContainerDied","Data":"336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.330261 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.365762 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z 
is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.385647 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.397665 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.397705 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.397714 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.397729 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.397739 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:09Z","lastTransitionTime":"2026-01-26T14:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.406732 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.425730 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.445920 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.460219 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.490523 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9
df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.506479 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.506543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.506555 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.506576 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.506589 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:09Z","lastTransitionTime":"2026-01-26T14:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.507344 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.525819 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.542583 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.553791 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.567313 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.596373 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.609018 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.609090 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.609105 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.609126 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.609139 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:09Z","lastTransitionTime":"2026-01-26T14:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.633694 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.712009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.712129 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.712149 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.712172 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.712189 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:09Z","lastTransitionTime":"2026-01-26T14:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.815403 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.815485 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.815508 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.815536 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.815556 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:09Z","lastTransitionTime":"2026-01-26T14:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.860763 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.860917 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.861000 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:17.860980688 +0000 UTC m=+35.063243460 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.919613 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.919659 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.919668 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.919693 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.919707 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:09Z","lastTransitionTime":"2026-01-26T14:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.961912 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962184 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:10:17.962135264 +0000 UTC m=+35.164398056 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.962271 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.962555 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962582 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962624 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962649 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:09 crc kubenswrapper[4922]: I0126 14:10:09.962660 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962747 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:17.962713191 +0000 UTC m=+35.164976113 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962805 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962861 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:17.962848945 +0000 UTC m=+35.165111737 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962858 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962923 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.962940 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:09 crc kubenswrapper[4922]: E0126 14:10:09.963006 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:17.962982749 +0000 UTC m=+35.165245521 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.024217 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.024313 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.024332 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.024362 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.024385 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.048377 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:06:52.203090595 +0000 UTC Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.092402 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.092495 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.092421 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:10 crc kubenswrapper[4922]: E0126 14:10:10.092608 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:10 crc kubenswrapper[4922]: E0126 14:10:10.092802 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:10 crc kubenswrapper[4922]: E0126 14:10:10.092958 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.127720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.127774 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.127788 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.127808 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.127825 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.231444 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.231990 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.232004 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.232027 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.232042 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.317869 4922 generic.go:334] "Generic (PLEG): container finished" podID="a1c927f4-1d72-49fa-b6fd-9390de6d00d0" containerID="1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3" exitCode=0 Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.317960 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" event={"ID":"a1c927f4-1d72-49fa-b6fd-9390de6d00d0","Type":"ContainerDied","Data":"1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.327911 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.328583 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.328629 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.328641 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.335127 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.335161 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.335173 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.335191 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.335203 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.354475 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.366898 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.369137 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.378688 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.394559 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.414280 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.427971 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.437841 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.437890 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.437898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.437918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.437928 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.442940 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.458740 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cn
i-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.480452 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.496803 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.508361 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.520589 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.534533 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.540354 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.540617 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.540684 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.540792 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.540868 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.548811 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.564798 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336
cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.589821 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.604700 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.619188 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.630865 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.643590 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.643640 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.643652 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.643673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.643690 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.644226 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.659609 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.673146 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.688698 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.717354 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d74eb6c938f9bdf289db56c2cf7d1fef1c1817
1a1d8b05eedefdc6ed05995e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.731468 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.746719 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.746762 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.746774 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.746795 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.746809 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.756400 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.776151 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.794703 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.813957 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.833833 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.849924 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.849966 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.849980 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.849999 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.850015 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.852755 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.953280 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.953334 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.953346 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.953368 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:10 crc kubenswrapper[4922]: I0126 14:10:10.953382 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:10Z","lastTransitionTime":"2026-01-26T14:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.049348 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:33:37.902110723 +0000 UTC Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.056873 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.056949 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.056972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.057000 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.057020 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.160916 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.160976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.160990 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.161013 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.161029 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.264682 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.264760 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.264777 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.264801 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.264815 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.339024 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" event={"ID":"a1c927f4-1d72-49fa-b6fd-9390de6d00d0","Type":"ContainerStarted","Data":"da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.358616 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.368254 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.368304 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.368316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.368336 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.368350 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.380951 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.401902 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.430707 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.454504 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.471293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.471347 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.471369 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.471393 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.471409 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.485430 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.522711 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.549031 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.562721 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.574027 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.574084 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.574098 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.574119 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.574134 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.580931 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.597595 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.622000 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.643958 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02
14ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc72
6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.665189 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.676423 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.676474 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.676486 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.676507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.676522 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.689901 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.779392 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.779440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.779451 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.779468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.779478 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.882429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.882482 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.882492 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.882512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.882526 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.986296 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.986670 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.986842 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.986976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:11 crc kubenswrapper[4922]: I0126 14:10:11.987126 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:11Z","lastTransitionTime":"2026-01-26T14:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.050021 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 21:00:38.282876081 +0000 UTC Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.091119 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.091174 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.091198 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.091223 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.091241 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:12Z","lastTransitionTime":"2026-01-26T14:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.091589 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:12 crc kubenswrapper[4922]: E0126 14:10:12.091740 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.091844 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:12 crc kubenswrapper[4922]: E0126 14:10:12.092050 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.092264 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:12 crc kubenswrapper[4922]: E0126 14:10:12.092599 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.194548 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.194874 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.195018 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.195205 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.195371 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:12Z","lastTransitionTime":"2026-01-26T14:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.298791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.298835 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.298847 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.298866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.298880 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:12Z","lastTransitionTime":"2026-01-26T14:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.402237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.402302 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.402320 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.402347 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.402367 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:12Z","lastTransitionTime":"2026-01-26T14:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.505482 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.505535 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.505551 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.505606 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.505626 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:12Z","lastTransitionTime":"2026-01-26T14:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.608357 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.608400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.608411 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.608427 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.608440 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:12Z","lastTransitionTime":"2026-01-26T14:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.711886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.711960 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.711979 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.712015 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.712034 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:12Z","lastTransitionTime":"2026-01-26T14:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.815053 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.815142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.815189 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.815218 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.815232 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:12Z","lastTransitionTime":"2026-01-26T14:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.918711 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.918830 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.918897 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.918930 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:12 crc kubenswrapper[4922]: I0126 14:10:12.918995 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:12Z","lastTransitionTime":"2026-01-26T14:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.022274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.022323 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.022334 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.022353 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.022366 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.050664 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:59:22.913902805 +0000 UTC Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.112551 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.129645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.129736 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.129763 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.129798 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.129833 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.136195 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\
\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.164462 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.185825 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.205217 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.222649 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.233433 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.233471 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.233484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.233518 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.233533 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.238484 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.253458 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.269634 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.290170 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.314681 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf74306
7f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.333097 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.336689 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.336742 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.336756 4922 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.336784 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.336801 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.349344 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/0.log" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.353133 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.353229 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e" exitCode=1 Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.353301 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.354530 4922 scope.go:117] "RemoveContainer" containerID="26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.369926 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.388181 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.408035 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.429303 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.440502 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.440550 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.440563 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.440580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.440592 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.443576 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.456116 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.474933 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 
14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.494592 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.515689 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.539265 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:12Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0126 14:10:12.656522 6225 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:10:12.656529 6225 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:10:12.656581 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 14:10:12.656596 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:10:12.656605 6225 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:10:12.656615 6225 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 14:10:12.656624 6225 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 14:10:12.656859 6225 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 14:10:12.656925 6225 factory.go:656] Stopping watch factory\\\\nI0126 14:10:12.657111 6225 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:10:12.657581 6225 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:10:12.657644 6225 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c
50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.542982 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.543008 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.543017 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.543032 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.543043 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.556703 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.569675 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.581611 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.595025 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.605827 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.626944 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9
df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.640728 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.645734 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.645986 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.646000 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.646022 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.646040 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.749443 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.749492 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.749502 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.749521 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.749535 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.852732 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.852777 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.852789 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.852807 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.852818 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.955175 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.955222 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.955231 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.955252 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:13 crc kubenswrapper[4922]: I0126 14:10:13.955267 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:13Z","lastTransitionTime":"2026-01-26T14:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.051055 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:31:19.226358986 +0000 UTC Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.058178 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.058246 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.058269 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.058303 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.058326 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.091890 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.091994 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:14 crc kubenswrapper[4922]: E0126 14:10:14.092030 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.091994 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:14 crc kubenswrapper[4922]: E0126 14:10:14.092270 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:14 crc kubenswrapper[4922]: E0126 14:10:14.092425 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.161270 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.161318 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.161331 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.161349 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.161361 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.264819 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.264878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.264888 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.264907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.264940 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.361269 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/0.log" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.365285 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.365700 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.367936 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.367992 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.368010 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.368034 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.368051 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.389447 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.405192 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.437842 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9
df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.460277 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.470596 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.470678 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.470698 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.470731 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.470750 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.485820 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.507712 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.531059 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.551191 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.567827 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.572848 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.572907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.572920 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.572940 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.572953 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.584336 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.600827 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.621755 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.639743 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.658932 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.676580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.676626 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.676636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.676653 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.676664 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.698099 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:12Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0126 14:10:12.656522 6225 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:10:12.656529 6225 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:10:12.656581 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 14:10:12.656596 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:10:12.656605 6225 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:10:12.656615 6225 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 14:10:12.656624 6225 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 14:10:12.656859 6225 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 14:10:12.656925 6225 factory.go:656] Stopping watch factory\\\\nI0126 14:10:12.657111 6225 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:10:12.657581 6225 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:10:12.657644 6225 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.779809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.779860 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.779870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.779888 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.779898 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.883323 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.883407 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.883429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.883461 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.883486 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.987770 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.987865 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.987882 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.987909 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:14 crc kubenswrapper[4922]: I0126 14:10:14.987928 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:14Z","lastTransitionTime":"2026-01-26T14:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.052237 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:05:23.03788423 +0000 UTC Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.091222 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.091299 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.091318 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.091346 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.091369 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:15Z","lastTransitionTime":"2026-01-26T14:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.195419 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.195484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.195499 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.195523 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.195538 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:15Z","lastTransitionTime":"2026-01-26T14:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.299339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.299446 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.299468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.299503 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.299529 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:15Z","lastTransitionTime":"2026-01-26T14:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.370845 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/1.log" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.377393 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/0.log" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.381227 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189" exitCode=1 Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.381286 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.381330 4922 scope.go:117] "RemoveContainer" containerID="26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.382764 4922 scope.go:117] "RemoveContainer" containerID="cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189" Jan 26 14:10:15 crc kubenswrapper[4922]: E0126 14:10:15.383003 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.395621 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.401871 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.401917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.401931 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.401951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.401965 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:15Z","lastTransitionTime":"2026-01-26T14:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.411342 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.423540 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.446868 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf74306
7f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.460762 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.480597 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.498122 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.505238 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.505271 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.505281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.505303 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.505314 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:15Z","lastTransitionTime":"2026-01-26T14:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.510463 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.525202 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.545619 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.557793 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.575164 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.595183 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:12Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0126 14:10:12.656522 6225 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:10:12.656529 6225 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:10:12.656581 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 14:10:12.656596 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:10:12.656605 6225 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:10:12.656615 6225 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 14:10:12.656624 6225 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 14:10:12.656859 6225 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 14:10:12.656925 6225 factory.go:656] Stopping watch factory\\\\nI0126 14:10:12.657111 6225 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:10:12.657581 6225 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:10:12.657644 6225 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] 
Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d
773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.608087 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.608155 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.608174 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.608196 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.608212 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:15Z","lastTransitionTime":"2026-01-26T14:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.618420 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.639107 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.711519 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.711580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.711655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.711679 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.711693 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:15Z","lastTransitionTime":"2026-01-26T14:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.815099 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.815142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.815154 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.815175 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.815197 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:15Z","lastTransitionTime":"2026-01-26T14:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.832591 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7"] Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.835640 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.841449 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.841597 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.861776 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.881771 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.901569 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.917574 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.917603 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.917613 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.917629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.917640 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:15Z","lastTransitionTime":"2026-01-26T14:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.920786 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.931600 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.935348 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/eb3fd63f-eedf-4790-88f6-325e446b37c2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.935417 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb3fd63f-eedf-4790-88f6-325e446b37c2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.935465 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvb6\" (UniqueName: \"kubernetes.io/projected/eb3fd63f-eedf-4790-88f6-325e446b37c2-kube-api-access-thvb6\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.935496 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb3fd63f-eedf-4790-88f6-325e446b37c2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.953304 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf74306
7f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.970816 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:15 crc kubenswrapper[4922]: I0126 14:10:15.986578 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.i
o\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.001592 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.016703 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.020381 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.020441 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.020459 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.020488 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.020506 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.032282 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.036041 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb3fd63f-eedf-4790-88f6-325e446b37c2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.036098 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb3fd63f-eedf-4790-88f6-325e446b37c2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.036137 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thvb6\" (UniqueName: \"kubernetes.io/projected/eb3fd63f-eedf-4790-88f6-325e446b37c2-kube-api-access-thvb6\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.036167 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb3fd63f-eedf-4790-88f6-325e446b37c2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.036873 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb3fd63f-eedf-4790-88f6-325e446b37c2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.037376 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb3fd63f-eedf-4790-88f6-325e446b37c2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.043186 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb3fd63f-eedf-4790-88f6-325e446b37c2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.048143 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.053227 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:46:42.179112033 +0000 UTC Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.053936 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thvb6\" (UniqueName: \"kubernetes.io/projected/eb3fd63f-eedf-4790-88f6-325e446b37c2-kube-api-access-thvb6\") pod \"ovnkube-control-plane-749d76644c-cfbd7\" (UID: \"eb3fd63f-eedf-4790-88f6-325e446b37c2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.067627 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.083559 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.091556 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.091564 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.091674 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.091563 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.091822 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.092104 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.106453 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd436368
3fc13a120374241e184be189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d74eb6c938f9bdf289db56c2cf7d1fef1c18171a1d8b05eedefdc6ed05995e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:12Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0126 14:10:12.656522 6225 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:10:12.656529 6225 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:10:12.656581 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 14:10:12.656596 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:10:12.656605 6225 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:10:12.656615 6225 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 14:10:12.656624 6225 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 14:10:12.656859 6225 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 14:10:12.656925 6225 factory.go:656] Stopping watch factory\\\\nI0126 14:10:12.657111 6225 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:10:12.657581 6225 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:10:12.657644 6225 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.122971 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.123045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.123078 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.123099 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.123110 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.124531 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.161193 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" Jan 26 14:10:16 crc kubenswrapper[4922]: W0126 14:10:16.176620 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb3fd63f_eedf_4790_88f6_325e446b37c2.slice/crio-e9b6f97ba373f1ebe818ce333aed57ff166b27d265b9b6a62fc83e93f5ba30f9 WatchSource:0}: Error finding container e9b6f97ba373f1ebe818ce333aed57ff166b27d265b9b6a62fc83e93f5ba30f9: Status 404 returned error can't find the container with id e9b6f97ba373f1ebe818ce333aed57ff166b27d265b9b6a62fc83e93f5ba30f9 Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.223660 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.223708 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.223719 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.223739 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.223750 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.240525 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.246870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.246907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.246916 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.246932 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.246942 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.268677 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.274492 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.274553 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.274572 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.274597 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.274617 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.294777 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.301282 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.301338 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.301348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.301368 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.301381 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.315739 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.321732 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.321792 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.321811 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.321837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.321854 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.336732 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.336939 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.343293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.343340 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.343353 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.343369 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.343380 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.385905 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" event={"ID":"eb3fd63f-eedf-4790-88f6-325e446b37c2","Type":"ContainerStarted","Data":"e9b6f97ba373f1ebe818ce333aed57ff166b27d265b9b6a62fc83e93f5ba30f9"} Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.388117 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/1.log" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.391457 4922 scope.go:117] "RemoveContainer" containerID="cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189" Jan 26 14:10:16 crc kubenswrapper[4922]: E0126 14:10:16.391662 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.408759 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.425319 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.440330 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.445931 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.445963 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.445972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.445988 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.445997 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.454691 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.471967 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488
c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" 
not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.486123 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.502851 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.523810 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.540032 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.548793 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.549024 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.549158 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.549255 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.549347 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.556435 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.569859 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.584762 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.600644 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.617658 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.645539 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.652014 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.652052 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.652085 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.652109 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.652121 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.660784 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.754294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.754328 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.754336 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.754352 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.754363 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.856512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.856554 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.856570 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.856588 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.856599 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.959651 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.959704 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.959715 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.959733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:16 crc kubenswrapper[4922]: I0126 14:10:16.959744 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:16Z","lastTransitionTime":"2026-01-26T14:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.054185 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:01:48.864652233 +0000 UTC Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.063625 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.063679 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.063694 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.063719 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.063734 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.167914 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.167966 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.167986 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.168007 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.168019 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.270811 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.270886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.270911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.270943 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.270968 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.348747 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pzxnt"] Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.351457 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:17 crc kubenswrapper[4922]: E0126 14:10:17.351661 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.370869 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.374248 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.374306 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.374325 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.374353 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.374372 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.391553 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.398352 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" event={"ID":"eb3fd63f-eedf-4790-88f6-325e446b37c2","Type":"ContainerStarted","Data":"aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.398438 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" event={"ID":"eb3fd63f-eedf-4790-88f6-325e446b37c2","Type":"ContainerStarted","Data":"2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.409647 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.426096 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.438723 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.453619 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87h8\" (UniqueName: \"kubernetes.io/projected/756187f6-68ea-4408-8d07-f691e16b4484-kube-api-access-z87h8\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " 
pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.453685 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.456623 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.470476 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.476586 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.476627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.476643 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.476667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.476683 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.485231 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.504895 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.516839 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.531458 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.555246 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.555330 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87h8\" (UniqueName: \"kubernetes.io/projected/756187f6-68ea-4408-8d07-f691e16b4484-kube-api-access-z87h8\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:17 crc kubenswrapper[4922]: E0126 14:10:17.555916 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:17 crc kubenswrapper[4922]: E0126 14:10:17.556030 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs podName:756187f6-68ea-4408-8d07-f691e16b4484 nodeName:}" failed. 
No retries permitted until 2026-01-26 14:10:18.056004726 +0000 UTC m=+35.258267558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs") pod "network-metrics-daemon-pzxnt" (UID: "756187f6-68ea-4408-8d07-f691e16b4484") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.561010 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.577794 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87h8\" (UniqueName: \"kubernetes.io/projected/756187f6-68ea-4408-8d07-f691e16b4484-kube-api-access-z87h8\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.578972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.579029 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.579044 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.579083 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.579099 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.580118 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.598205 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.611844 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.628051 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.643301 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.663701 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.677821 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.682193 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.682240 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.682256 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.682281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.682298 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.693669 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.708142 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.721601 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.736213 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.748519 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.765569 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name
\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.784784 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.784838 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.784852 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.784873 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.784889 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.794677 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.810478 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.833300 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf74306
7f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.852208 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.869706 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.883399 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.890659 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.890725 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.890747 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.890777 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.890797 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.901372 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.915525 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.933695 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.960601 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:17 crc kubenswrapper[4922]: E0126 14:10:17.960882 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:17 crc kubenswrapper[4922]: E0126 14:10:17.961019 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:33.96099132 +0000 UTC m=+51.163254092 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.994305 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.994361 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.994374 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.994397 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:17 crc kubenswrapper[4922]: I0126 14:10:17.994412 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:17Z","lastTransitionTime":"2026-01-26T14:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.055564 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:28:00.834615396 +0000 UTC Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.061542 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.061721 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:10:34.061674013 +0000 UTC m=+51.263936845 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.061800 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.062123 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.062201 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.062272 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062308 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062399 4922 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:34.062375913 +0000 UTC m=+51.264638725 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062528 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062567 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062674 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062589 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062713 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062741 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062762 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062684 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs podName:756187f6-68ea-4408-8d07-f691e16b4484 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:19.062653782 +0000 UTC m=+36.264916564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs") pod "network-metrics-daemon-pzxnt" (UID: "756187f6-68ea-4408-8d07-f691e16b4484") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.062881 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:34.062848308 +0000 UTC m=+51.265111260 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.063149 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:34.063127366 +0000 UTC m=+51.265390398 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.091728 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.091761 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.091816 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.091864 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.091964 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:18 crc kubenswrapper[4922]: E0126 14:10:18.092245 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.097036 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.097171 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.097198 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.097225 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.097243 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:18Z","lastTransitionTime":"2026-01-26T14:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.200192 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.200241 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.200253 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.200284 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.200297 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:18Z","lastTransitionTime":"2026-01-26T14:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.303698 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.303757 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.303771 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.303791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.303807 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:18Z","lastTransitionTime":"2026-01-26T14:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.405889 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.405941 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.405950 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.405969 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.405980 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:18Z","lastTransitionTime":"2026-01-26T14:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.509454 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.509534 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.509604 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.509631 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.509650 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:18Z","lastTransitionTime":"2026-01-26T14:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.613180 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.613285 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.613310 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.613343 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.613363 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:18Z","lastTransitionTime":"2026-01-26T14:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.716560 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.716605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.716618 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.716639 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.716650 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:18Z","lastTransitionTime":"2026-01-26T14:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.820250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.820328 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.820347 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.820376 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.820393 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:18Z","lastTransitionTime":"2026-01-26T14:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.923596 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.923700 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.923740 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.923775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:18 crc kubenswrapper[4922]: I0126 14:10:18.923802 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:18Z","lastTransitionTime":"2026-01-26T14:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.026974 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.027132 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.027160 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.027191 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.027215 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.057277 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:52:29.874615931 +0000 UTC Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.076593 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:19 crc kubenswrapper[4922]: E0126 14:10:19.076923 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:19 crc kubenswrapper[4922]: E0126 14:10:19.077123 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs podName:756187f6-68ea-4408-8d07-f691e16b4484 nodeName:}" failed. 
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.092506 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt"
Jan 26 14:10:19 crc kubenswrapper[4922]: E0126 14:10:19.092753 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.131204 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.131263 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.131276 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.131300 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.131315 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.235295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.235380 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.235400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.235428 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.235447 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.339930 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.340057 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.340142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.340227 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.340250 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.443513 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.443596 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.443619 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.443651 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.443674 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.546564 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.546608 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.546616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.546633 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.546646 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.647987 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.649002 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.649035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.649047 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.649078 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.649090 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.667014 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.683928 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.695023 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.712862 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.739845 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.752418 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.752461 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.752475 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.752494 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.752508 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.756099 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.773188 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.790903 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.804135 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.823783 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.837647 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z"
Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.854132 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.855404 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.855448 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.855460 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.855479 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.855539 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.869996 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.893635 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.919618 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.940098 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.959542 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.959626 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.959652 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.959697 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.959730 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:19Z","lastTransitionTime":"2026-01-26T14:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:19 crc kubenswrapper[4922]: I0126 14:10:19.964207 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.057844 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:27:02.64538177 +0000 UTC Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.063164 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.063239 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.063259 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.063291 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.063314 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.091741 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.091851 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:20 crc kubenswrapper[4922]: E0126 14:10:20.091883 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.091748 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:20 crc kubenswrapper[4922]: E0126 14:10:20.092024 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:20 crc kubenswrapper[4922]: E0126 14:10:20.092238 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.167154 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.167245 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.167265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.167296 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.167319 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.271282 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.271368 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.271394 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.271429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.271451 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.375194 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.375265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.375278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.375300 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.375314 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.479032 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.479122 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.479141 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.479170 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.479189 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.583191 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.583296 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.583324 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.583363 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.583392 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.687916 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.687987 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.688012 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.688050 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.688120 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.791824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.791877 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.791889 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.791909 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.791920 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.894680 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.894755 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.894774 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.894804 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.894832 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.997274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.997325 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.997339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.997357 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:20 crc kubenswrapper[4922]: I0126 14:10:20.997371 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:20Z","lastTransitionTime":"2026-01-26T14:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.058201 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:43:00.542973149 +0000 UTC Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.092013 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:21 crc kubenswrapper[4922]: E0126 14:10:21.092272 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.100832 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.100870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.100882 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.100906 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.100917 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:21Z","lastTransitionTime":"2026-01-26T14:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.101985 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:21 crc kubenswrapper[4922]: E0126 14:10:21.102407 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:21 crc kubenswrapper[4922]: E0126 14:10:21.102546 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs podName:756187f6-68ea-4408-8d07-f691e16b4484 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:25.102506018 +0000 UTC m=+42.304768840 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs") pod "network-metrics-daemon-pzxnt" (UID: "756187f6-68ea-4408-8d07-f691e16b4484") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.205252 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.205314 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.205332 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.205355 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.205368 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:21Z","lastTransitionTime":"2026-01-26T14:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.308470 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.308525 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.308540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.308561 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.308577 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:21Z","lastTransitionTime":"2026-01-26T14:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.412786 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.412872 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.412893 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.412922 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.412943 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:21Z","lastTransitionTime":"2026-01-26T14:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.515335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.515399 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.515416 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.515440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.515456 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:21Z","lastTransitionTime":"2026-01-26T14:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.618736 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.618814 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.618825 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.618848 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.618859 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:21Z","lastTransitionTime":"2026-01-26T14:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.721652 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.721728 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.721747 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.721776 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.721796 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:21Z","lastTransitionTime":"2026-01-26T14:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.825669 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.825727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.825742 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.825763 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.825777 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:21Z","lastTransitionTime":"2026-01-26T14:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.928054 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.928108 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.928116 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.928131 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:21 crc kubenswrapper[4922]: I0126 14:10:21.928141 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:21Z","lastTransitionTime":"2026-01-26T14:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.031956 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.032013 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.032022 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.032040 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.032052 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.059201 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:56:19.975704605 +0000 UTC Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.091769 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:22 crc kubenswrapper[4922]: E0126 14:10:22.091938 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.092047 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:22 crc kubenswrapper[4922]: E0126 14:10:22.092245 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.092135 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:22 crc kubenswrapper[4922]: E0126 14:10:22.092483 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.135478 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.135580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.135605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.135636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.135655 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.238837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.238895 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.238909 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.238928 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.238939 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.342234 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.342281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.342290 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.342308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.342317 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.445650 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.445716 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.445729 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.445751 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.445763 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.558752 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.558786 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.558795 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.558809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.558820 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.661463 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.661506 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.661516 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.661531 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.661540 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.763757 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.763809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.763821 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.763836 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.763848 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.866330 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.866397 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.866410 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.866432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.866447 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.969674 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.969724 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.969735 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.969753 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:22 crc kubenswrapper[4922]: I0126 14:10:22.969765 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:22Z","lastTransitionTime":"2026-01-26T14:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.059896 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:55:30.338704061 +0000 UTC Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.072608 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.072655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.072667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.072688 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.072702 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:23Z","lastTransitionTime":"2026-01-26T14:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.092162 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:23 crc kubenswrapper[4922]: E0126 14:10:23.092337 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.111843 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.128613 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.147562 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.165298 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.175351 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.175413 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.175430 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.175454 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.175470 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:23Z","lastTransitionTime":"2026-01-26T14:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.180152 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.195165 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.211668 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.231323 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.244606 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.261624 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name
\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.278269 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.278344 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.278356 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.278381 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.278411 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:23Z","lastTransitionTime":"2026-01-26T14:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.283034 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.298550 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.313927 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\
":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.337900 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.357350 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.375728 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.381114 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.381172 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.381182 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.381201 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.381211 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:23Z","lastTransitionTime":"2026-01-26T14:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.392611 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.484244 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.484294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.484303 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.484319 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:23 crc kubenswrapper[4922]: I0126 14:10:23.484330 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:23Z","lastTransitionTime":"2026-01-26T14:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... five identical node-status cycles elided (I0126 14:10:23.587635 through I0126 14:10:24.001628); each repeats the NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady events and the same "Node became not ready" KubeletNotReady/CNI condition ...]
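The cycle above repeats because the kubelet keeps re-evaluating the node's Ready condition and keeps finding the same NetworkReady=false result. A minimal sketch of the same check, assuming it runs directly on the node; the directory is the one named in the log, and the accepted suffixes are an assumption based on libcni's documented behaviour (Python used for illustration):

```python
from pathlib import Path

# Directory named in the kubelet's NetworkReady=false message above.
CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

def cni_configs():
    """Return CNI config files the runtime could load from the directory."""
    if not CNI_CONF_DIR.is_dir():
        return []
    # .conf/.conflist/.json are the suffixes the CNI config loader accepts
    # (an assumption based on libcni's documented behaviour).
    return sorted(p for p in CNI_CONF_DIR.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    found = cni_configs()
    if found:
        for p in found:
            print(f"CNI config present: {p}")
    else:
        # The empty case is exactly the condition the kubelet keeps logging.
        print(f"no CNI configuration file in {CNI_CONF_DIR}/ - has the network provider started?")
```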
Jan 26 14:10:24 crc kubenswrapper[4922]: I0126 14:10:24.060403 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:38:53.516597649 +0000 UTC
Jan 26 14:10:24 crc kubenswrapper[4922]: I0126 14:10:24.091958 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:10:24 crc kubenswrapper[4922]: I0126 14:10:24.092104 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:10:24 crc kubenswrapper[4922]: E0126 14:10:24.092138 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 14:10:24 crc kubenswrapper[4922]: I0126 14:10:24.092104 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 14:10:24 crc kubenswrapper[4922]: E0126 14:10:24.092325 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 14:10:24 crc kubenswrapper[4922]: E0126 14:10:24.092846 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
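The certificate_manager lines print a different rotation deadline on each pass because client-go re-jitters the threshold, documented as 80% +/- 10% of the certificate's lifetime, every time it is computed. A sketch of that calculation, in Python rather than client-go's Go; the notBefore date below is an assumed one-year lifetime, not something the log states:

```python
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    # client-go documents the threshold as 80% +/- 10% of the cert lifetime;
    # the uniform [0.7, 0.9) jitter below is an assumption matching that range.
    lifetime = not_after - not_before
    jitter = 0.7 + 0.2 * random.random()
    return not_before + timedelta(seconds=lifetime.total_seconds() * jitter)

# Expiration from the log: 2026-02-24 05:53:03 UTC; issue time is hypothetical.
not_after = datetime(2026, 2, 24, 5, 53, 3)
not_before = datetime(2025, 2, 24, 5, 53, 3)  # assumed one-year certificate
for _ in range(3):
    print(rotation_deadline(not_before, not_after))
# Deadlines land between Nov 2025 and Jan 2026 - already in the past on
# 2026-01-26, which is why rotation is due as soon as the kubelet can renew.
```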
[... ten identical node-status cycles elided (I0126 14:10:24.105026 through I0126 14:10:25.040998), all repeating the same NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady events and "Node became not ready" KubeletNotReady/CNI condition ...]
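Every status patch above dies on the same webhook TLS failure: the pod.network-node-identity serving certificate expired on 2025-08-24, months before the node's current clock of 2026-01-26 (typical of a CRC VM resumed after a long suspend). A sketch that pulls the webhook's certificate from the address in the log and compares its validity window against the local clock; the cryptography package (>= 42 for the *_utc accessors) is an assumed dependency:

```python
import ssl
from datetime import datetime, timezone

from cryptography import x509  # assumed dependency, >= 42 for *_utc accessors

# Webhook address from the "failed calling webhook" errors above; run on the node.
HOST, PORT = "127.0.0.1", 9743

# Fetch the serving certificate without verification; we only want to inspect it.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
if not (cert.not_valid_before_utc <= now <= cert.not_valid_after_utc):
    # Mirrors the logged error: "certificate has expired or is not yet valid".
    print(f"INVALID: current time {now.isoformat()} is outside the validity window")
```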
Jan 26 14:10:25 crc kubenswrapper[4922]: I0126 14:10:25.060811 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:05:20.727482554 +0000 UTC
Jan 26 14:10:25 crc kubenswrapper[4922]: I0126 14:10:25.091894 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt"
Jan 26 14:10:25 crc kubenswrapper[4922]: E0126 14:10:25.092169 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484"
Jan 26 14:10:25 crc kubenswrapper[4922]: I0126 14:10:25.145739 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:25 crc kubenswrapper[4922]: I0126 14:10:25.145817 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:25 crc kubenswrapper[4922]: I0126 14:10:25.145836 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:25 crc kubenswrapper[4922]: I0126 14:10:25.145869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:25 crc kubenswrapper[4922]: I0126 14:10:25.145890 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:25Z","lastTransitionTime":"2026-01-26T14:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:10:25 crc kubenswrapper[4922]: I0126 14:10:25.155574 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt"
Jan 26 14:10:25 crc kubenswrapper[4922]: E0126 14:10:25.155863 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 14:10:25 crc kubenswrapper[4922]: E0126 14:10:25.156037 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs podName:756187f6-68ea-4408-8d07-f691e16b4484 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:33.156002726 +0000 UTC m=+50.358265598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs") pod "network-metrics-daemon-pzxnt" (UID: "756187f6-68ea-4408-8d07-f691e16b4484") : object "openshift-multus"/"metrics-daemon-secret" not registered
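The mount failure above reports the secret as "not registered" rather than missing: the kubelet's object cache has no registration for it yet, which is usually transient while pods are still being re-synced after startup. If it persists, an API-side existence check rules out a genuinely absent secret. A sketch assuming the kubernetes Python client and a kubeconfig with read access to openshift-multus:

```python
from kubernetes import client, config  # assumed: pip install kubernetes
from kubernetes.client.rest import ApiException

config.load_kube_config()  # or config.load_incluster_config() from inside a pod
v1 = client.CoreV1Api()

# Names taken from the MountVolume error above.
try:
    s = v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
    print("secret exists on the API server; keys:", sorted((s.data or {}).keys()))
except ApiException as e:
    # A 404 here would mean the secret really is absent, not just unregistered.
    print("lookup failed:", e.status, e.reason)
```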
[... eight identical node-status cycles elided (I0126 14:10:25.250003 through I0126 14:10:25.972593), all repeating the same NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady events and "Node became not ready" KubeletNotReady/CNI condition ...]
Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.061595 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:40:27.113106919 +0000 UTC
Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.075319 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.075373 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.075386 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.075410 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.075424 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.091652 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.091717 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:10:26 crc kubenswrapper[4922]: E0126 14:10:26.091835 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.091717 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 14:10:26 crc kubenswrapper[4922]: E0126 14:10:26.092001 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 14:10:26 crc kubenswrapper[4922]: E0126 14:10:26.092337 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.179373 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.179443 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.179536 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.179568 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.179588 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.283572 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.283671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.283706 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.283745 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.283781 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.386412 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.386465 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.386478 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.386500 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.386514 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.490004 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.490080 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.490093 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.490115 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.490127 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.593396 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.593462 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.593482 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.593509 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.593529 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.605872 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.605924 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.605940 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.605966 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.605985 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: E0126 14:10:26.621329 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:26Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.626710 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.626755 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.626772 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.626791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.626810 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: E0126 14:10:26.643902 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:26Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.648923 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.648998 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
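Every status-patch attempt above fails for the same reason: before admitting the PATCH to the Node object, the API server calls the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743/node, and the TLS handshake is rejected because the webhook's serving certificate expired at 2025-08-24T17:21:41Z, well before the current clock of 2026-01-26T14:10:26Z. The standalone Go sketch below reproduces the NotBefore/NotAfter validity-window check that crypto/x509 applies during verification; the certificate path is a placeholder, not a path named in this log.

    // certcheck.go - sketch: report whether a PEM-encoded certificate is valid
    // at the current time, mirroring the x509 "expired or is not yet valid"
    // failure in the log above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; substitute the webhook's actual serving cert.
        pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil || block.Type != "CERTIFICATE" {
            log.Fatal("no CERTIFICATE block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        switch {
        case now.After(cert.NotAfter):
            // Same comparison the kubelet reports: current time is after NotAfter.
            fmt.Printf("expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        default:
            fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

Run against the webhook's serving certificate, this prints the same "current time ... is after ..." comparison that appears in the err= tail of each failed patch.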
event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.649019 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.649046 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.649147 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: E0126 14:10:26.664706 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:26Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.669284 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.669334 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
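Independently of the webhook failure, the node is held NotReady because the container runtime's network is not ready: the runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/, which is expected for as long as the network provider has not started and written one. Below is a minimal Go sketch of the same directory scan; the accepted extensions (.conf, .conflist, .json) follow the common libcni convention and are an assumption here, not something this log states.

    // cnicheck.go - sketch: scan the CNI conf directory named in the log and
    // count candidate network configuration files.
    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory quoted in the log message
        entries, err := os.ReadDir(confDir)
        if err != nil {
            log.Fatalf("cannot read %s: %v", confDir, err)
        }
        var found []string
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            // Assumed libcni-style extension filter.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                found = append(found, e.Name())
            }
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file found; the network provider has not written one yet")
            return
        }
        fmt.Printf("found %d CNI config file(s): %v\n", len(found), found)
    }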
event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.669398 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.669417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.669429 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: E0126 14:10:26.685254 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:26Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.689830 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.689864 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.689875 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.689892 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.689903 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: E0126 14:10:26.707379 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:26Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:26 crc kubenswrapper[4922]: E0126 14:10:26.707532 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.709430 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.709460 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.709469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.709483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.709494 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.813595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.813676 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.813696 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.813724 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.813745 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.917251 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.917310 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.917329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.917357 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:26 crc kubenswrapper[4922]: I0126 14:10:26.917381 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:26Z","lastTransitionTime":"2026-01-26T14:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.019800 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.020242 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.020351 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.020392 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.020404 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.061741 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:34:31.05171396 +0000 UTC Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.091830 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:27 crc kubenswrapper[4922]: E0126 14:10:27.092095 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.122930 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.123010 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.123029 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.123059 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.123112 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.226232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.226296 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.226313 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.226337 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.226355 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.330211 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.330275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.330292 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.330328 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.330347 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.433030 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.433111 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.433124 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.433140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.433153 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.537111 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.537165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.537174 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.537192 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.537203 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.640391 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.640436 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.640446 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.640467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.640478 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.743990 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.744052 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.744087 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.744113 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.744128 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.848440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.848520 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.848540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.848573 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.848596 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.952388 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.952439 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.952452 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.952471 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:27 crc kubenswrapper[4922]: I0126 14:10:27.952483 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:27Z","lastTransitionTime":"2026-01-26T14:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.056485 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.056535 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.056544 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.056561 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.056572 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.062717 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:44:32.447261632 +0000 UTC Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.091815 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.091882 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:28 crc kubenswrapper[4922]: E0126 14:10:28.092094 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.092150 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:28 crc kubenswrapper[4922]: E0126 14:10:28.092350 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:28 crc kubenswrapper[4922]: E0126 14:10:28.092462 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.160198 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.160270 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.160293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.160329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.160354 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.263868 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.263927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.263945 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.263973 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.263992 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.367336 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.367403 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.367417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.367437 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.367450 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.470667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.470731 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.470750 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.470847 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.470917 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.575245 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.575302 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.575319 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.575348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.575554 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.678896 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.678944 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.678955 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.678976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.678990 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.781985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.782047 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.782096 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.782125 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.782145 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.885312 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.885383 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.885407 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.885440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.885463 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.988583 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.988657 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.988674 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.988704 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:28 crc kubenswrapper[4922]: I0126 14:10:28.988721 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:28Z","lastTransitionTime":"2026-01-26T14:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.063667 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:45:39.238327811 +0000 UTC Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.091556 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.091558 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:29 crc kubenswrapper[4922]: E0126 14:10:29.091765 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.091612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.091891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.091928 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.091947 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:29Z","lastTransitionTime":"2026-01-26T14:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.195099 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.195150 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.195159 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.195177 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.195188 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:29Z","lastTransitionTime":"2026-01-26T14:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.298775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.298833 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.298849 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.298874 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.298894 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:29Z","lastTransitionTime":"2026-01-26T14:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.402208 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.402770 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.402840 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.402956 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.403024 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:29Z","lastTransitionTime":"2026-01-26T14:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.506328 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.506609 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.506619 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.506634 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.506644 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:29Z","lastTransitionTime":"2026-01-26T14:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.609216 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.609266 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.609281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.609305 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.609322 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:29Z","lastTransitionTime":"2026-01-26T14:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.711699 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.711750 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.711758 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.711775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.711784 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:29Z","lastTransitionTime":"2026-01-26T14:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.814417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.814458 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.814471 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.814501 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.814527 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:29Z","lastTransitionTime":"2026-01-26T14:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.917295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.917341 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.917352 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.917370 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:29 crc kubenswrapper[4922]: I0126 14:10:29.917380 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:29Z","lastTransitionTime":"2026-01-26T14:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.020929 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.020983 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.020998 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.021024 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.021035 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.064233 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:32:50.919223749 +0000 UTC Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.092447 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.092554 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.092505 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:30 crc kubenswrapper[4922]: E0126 14:10:30.092742 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:30 crc kubenswrapper[4922]: E0126 14:10:30.092888 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:30 crc kubenswrapper[4922]: E0126 14:10:30.093592 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.123839 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.123908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.123924 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.123946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.123959 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.227401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.227472 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.227487 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.227507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.227521 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.330714 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.330759 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.330769 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.330798 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.330809 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.434849 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.435353 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.435457 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.435538 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.435612 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.538109 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.538153 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.538161 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.538180 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.538189 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.640347 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.640404 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.640416 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.640435 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.640445 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.668181 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.676322 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.681011 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.694289 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 
14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.714768 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.727659 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.740473 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.742666 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.742729 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.742739 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.742760 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.742773 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.752904 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.765303 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.778702 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.796304 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.820248 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.839133 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.845251 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.845312 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.845325 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.845346 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.845358 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.856747 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.867710 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.881681 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.895203 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.911787 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.932630 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd436368
3fc13a120374241e184be189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:30Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.948387 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.948442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.948456 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.948478 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:30 crc kubenswrapper[4922]: I0126 14:10:30.948496 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:30Z","lastTransitionTime":"2026-01-26T14:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.051283 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.051329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.051339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.051357 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.051375 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.064361 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:29:57.822957233 +0000 UTC Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.092512 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:31 crc kubenswrapper[4922]: E0126 14:10:31.092686 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.093583 4922 scope.go:117] "RemoveContainer" containerID="cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.153398 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.153433 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.153441 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.153456 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.153467 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.260320 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.260399 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.260418 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.260448 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.260467 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.363199 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.363242 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.363252 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.363267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.363277 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.464688 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/1.log" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.466453 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.466504 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.466514 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.466531 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.466543 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.469285 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.469896 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.486147 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.497876 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.510535 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.534555 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.549034 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.561878 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.569623 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.569675 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.569688 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.569709 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 
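
Every "Failed to update status for pod" entry above and below ends in the same terminal clause: the kubelet's status patch is rejected because the serving certificate of the "pod.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-26 -- consistent with a CRC VM resumed long after its certificates lapsed. A minimal sketch of reproducing the kubelet's validity-window check from the host, assuming Python 3 with the third-party cryptography package (42+ for the *_utc accessors; older releases expose naive not_valid_after) and reachability of the webhook port:

    import datetime
    import ssl

    from cryptography import x509

    # Fetch the webhook's serving certificate WITHOUT verification --
    # verification is exactly the step that fails in the entries above.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())

    now = datetime.datetime.now(datetime.timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    if not (cert.not_valid_before_utc <= now <= cert.not_valid_after_utc):
        # Mirrors "x509: certificate has expired or is not yet valid"
        print("current time", now.isoformat(), "is outside the validity window")

The same dates can be read with openssl alone: openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates.
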
14:10:31.569722 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.575218 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.588939 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.603735 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.616169 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.632989 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
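
The err string that resumes on the next line embeds the attempted status patch as a doubly escaped JSON document: quoted once by the Go log formatter and once more as rendered in the journal. A rough sketch for inspecting such a payload offline, assuming the err="..." value has been copied verbatim into a file named err.txt; the file name and the quote-slicing heuristic are ad hoc:

    import json

    raw = open("err.txt").read()

    # The patch literal runs from the first escaped quote before '{'
    # to the escaped quote after the last '}'.
    start = raw.index('\\"{')
    end = raw.rindex('}\\"') + len('}\\"')
    literal = raw[start:end]

    # One unescape level turns \\\" into \" and \" into ", leaving a
    # JSON string literal; loading it twice yields the patch object.
    text = literal.encode().decode("unicode_escape")
    patch = json.loads(json.loads(text))

    print(json.dumps(patch["status"].get("conditions", []), indent=2))
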
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.650180 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
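
The sheer volume of these entries makes manual triage hard. After the journal prefix ("Jan 26 ... kubenswrapper[4922]: "), each message is a Go klog line with a fixed header: one-letter severity (I/W/E/F), MMDD date, wall time, PID, and source file:line. A small filtering sketch, assuming the journal prefix has already been stripped from each line:

    import re

    # klog header: severity, MMDD, wall time, PID, "file.go:line]"
    KLOG = re.compile(
        r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +"
        r"(?P<pid>\d+) (?P<src>[\w./-]+:\d+)\] (?P<msg>.*)$")

    line = ('I0126 14:10:31.561878 4922 status_manager.go:875] '
            '"Failed to update status for pod" ...')
    m = KLOG.match(line)
    if m:
        print(m["sev"], m["src"], "->", m["msg"][:60])

Grouping by src (e.g. status_manager.go:875 versus setters.go:603) separates the webhook failures from the node-readiness transitions in this section.
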
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.660328 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.671740 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.671783 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.671799 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.671820 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.671834 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.677696 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.694166 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.711275 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.727353 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.746707 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e
6931798e31a087cc3dc6bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:31Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.774217 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.774500 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.774563 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.774645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.774705 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.878317 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.878689 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.878775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.878863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.878943 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.981254 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.981304 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.981316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.981338 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:31 crc kubenswrapper[4922]: I0126 14:10:31.981350 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:31Z","lastTransitionTime":"2026-01-26T14:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.064983 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:21:33.895027538 +0000 UTC Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.084613 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.084657 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.084666 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.084682 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.084693 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:32Z","lastTransitionTime":"2026-01-26T14:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.092126 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.092204 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.092253 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:32 crc kubenswrapper[4922]: E0126 14:10:32.092298 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:32 crc kubenswrapper[4922]: E0126 14:10:32.092441 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:32 crc kubenswrapper[4922]: E0126 14:10:32.092580 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.187403 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.187442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.187452 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.187473 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.187483 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:32Z","lastTransitionTime":"2026-01-26T14:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.290783 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.290831 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.290847 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.290866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.290881 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:32Z","lastTransitionTime":"2026-01-26T14:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.394205 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.394260 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.394274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.394291 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.394306 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:32Z","lastTransitionTime":"2026-01-26T14:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.475521 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/2.log" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.476579 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/1.log" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.480792 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0" exitCode=1 Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.480877 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.480975 4922 scope.go:117] "RemoveContainer" containerID="cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.481923 4922 scope.go:117] "RemoveContainer" containerID="56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0" Jan 26 14:10:32 crc kubenswrapper[4922]: E0126 14:10:32.482228 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.502324 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.503562 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.503598 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.503610 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.503628 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.503641 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:32Z","lastTransitionTime":"2026-01-26T14:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.525771 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.542931 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.559298 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 
14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.584764 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.599059 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.606589 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.606628 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.606638 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.606656 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.606669 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:32Z","lastTransitionTime":"2026-01-26T14:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.617638 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.631989 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.642788 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\
\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.654050 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.664387 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.680605 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.694473 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.707650 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.709826 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.709873 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.709885 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.709908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.709921 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:32Z","lastTransitionTime":"2026-01-26T14:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.729587 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.744664 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.761249 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.778006 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.812406 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.812468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:32 crc 
kubenswrapper[4922]: I0126 14:10:32.812484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.812507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.812525 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:32Z","lastTransitionTime":"2026-01-26T14:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.915230 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.915276 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.915284 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.915299 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:32 crc kubenswrapper[4922]: I0126 14:10:32.915309 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:32Z","lastTransitionTime":"2026-01-26T14:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.018105 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.018165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.018174 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.018195 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.018207 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.065496 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:53:33.059809413 +0000 UTC Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.092347 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:33 crc kubenswrapper[4922]: E0126 14:10:33.092565 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.113972 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.120820 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.120895 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.120917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.120946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.120966 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.131366 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.144378 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.157937 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.173877 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.190359 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.207769 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.223384 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.223605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.223705 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.223770 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.223830 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.226240 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.244491 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd80bd9bd767cf650951b2cbb0006f22bd4363683fc13a120374241e184be189\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:14Z\\\",\\\"message\\\":\\\":10:14.141264 6367 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0126 14:10:14.141320 6367 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:10:14.141375 6367 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:10:14.141444 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0126 14:10:14.141214 6367 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nI0126 14:10:14.141521 6367 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.246786ms\\\\nI0126 14:10:14.140997 6367 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nF0126 14:10:14.141533 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.246799 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:33 crc kubenswrapper[4922]: E0126 14:10:33.246958 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:33 crc kubenswrapper[4922]: E0126 14:10:33.247044 4922 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs podName:756187f6-68ea-4408-8d07-f691e16b4484 nodeName:}" failed. No retries permitted until 2026-01-26 14:10:49.247024913 +0000 UTC m=+66.449287685 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs") pod "network-metrics-daemon-pzxnt" (UID: "756187f6-68ea-4408-8d07-f691e16b4484") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.263197 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a57
7f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.281847 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.294046 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.315699 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.327631 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.327695 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.327707 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.327727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.327739 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.330855 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.345688 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.362252 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 
14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.385875 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.404375 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.431681 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.431734 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.431752 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.431779 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.431796 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.488550 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/2.log" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.494229 4922 scope.go:117] "RemoveContainer" containerID="56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0" Jan 26 14:10:33 crc kubenswrapper[4922]: E0126 14:10:33.494593 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.517518 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.536121 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.536506 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.536520 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.536171 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.536548 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.536826 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.564250 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.581624 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.597774 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.619709 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.638567 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.639773 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.639814 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.639823 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.639841 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.639854 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.655321 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.667731 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.681047 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 
14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.703145 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.719385 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.736225 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.743307 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.743352 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.743366 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.743386 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.743399 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.758863 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.775442 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.790268 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.808427 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.824461 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.846400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.846455 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.846464 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.846484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.846501 4922 setters.go:603] "Node became not ready" node="crc" 
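
Every status-patch failure above shares one root cause, spelled out in the error string: the pod.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-26, so each Post to https://127.0.0.1:9743/pod fails TLS verification. A minimal Go sketch of the validity-window check that x509 verification performs during the handshake (the certificate path and helper name are hypothetical, for illustration only):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkValidity reports whether a PEM-encoded certificate is valid at t,
// mirroring the "expired or is not yet valid" x509 error seen in the log.
func checkValidity(pemBytes []byte, t time.Time) error {
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		return fmt.Errorf("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if t.Before(cert.NotBefore) {
		return fmt.Errorf("certificate not yet valid: current time %s is before %s",
			t.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
	if t.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s",
			t.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	// Hypothetical path; on this node the webhook serving cert lives wherever
	// network-node-identity mounts it.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := checkValidity(pemBytes, time.Now()); err != nil {
		fmt.Println("invalid:", err)
		return
	}
	fmt.Println("certificate is within its validity window")
}
```
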
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.965778 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:33 crc kubenswrapper[4922]: E0126 14:10:33.966006 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:33 crc kubenswrapper[4922]: E0126 14:10:33.966150 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:11:05.966124799 +0000 UTC m=+83.168387581 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.968839 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.968875 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.968887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.968911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:33 crc kubenswrapper[4922]: I0126 14:10:33.968959 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:33Z","lastTransitionTime":"2026-01-26T14:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.066151 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:33:26.775414896 +0000 UTC Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.066315 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.066422 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.066455 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.066505 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.066588 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:11:06.066559687 +0000 UTC m=+83.268822519 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.066619 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.066664 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.066684 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.066702 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.066727 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:11:06.066701211 +0000 UTC m=+83.268964023 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.066765 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:11:06.066747873 +0000 UTC m=+83.269010685 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.067340 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.067430 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.067482 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.067827 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:11:06.067760393 +0000 UTC m=+83.270023205 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.074548 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.074596 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.074609 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.074628 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.074642 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:34Z","lastTransitionTime":"2026-01-26T14:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.092455 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.092510 4922 util.go:30] "No sandbox for pod can be found. 
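
The volume errors above (secret/configmap objects "not registered", the kubevirt.io.hostpath-provisioner CSI driver missing from the registered-drivers list) are each rescheduled with a growing delay: "No retries permitted until ... (durationBeforeRetry 32s)". A rough Go sketch of that doubling-with-cap backoff; the initial delay and cap are assumptions for illustration, not the kubelet's exact constants:

```go
package main

import (
	"fmt"
	"time"
)

// Backoff models the per-operation retry delay seen in the log
// ("durationBeforeRetry 32s"): each failure doubles the wait, up to a cap.
type Backoff struct {
	delay time.Duration
	cap   time.Duration
}

func (b *Backoff) Next() time.Duration {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // assumed initial delay
	} else {
		b.delay *= 2
		if b.delay > b.cap {
			b.delay = b.cap
		}
	}
	return b.delay
}

func main() {
	b := Backoff{cap: 2 * time.Minute} // assumed cap
	for i := 1; i <= 8; i++ {
		fmt.Printf("failure %d: retry in %s\n", i, b.Next())
	}
	// Failure 7 prints "retry in 32s", matching the log's durationBeforeRetry.
}
```
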
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.092582 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.092461 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.092708 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:34 crc kubenswrapper[4922]: E0126 14:10:34.092846 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.178172 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.178224 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.178234 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.178255 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.178268 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:34Z","lastTransitionTime":"2026-01-26T14:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.281818 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.281875 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.281886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.281908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.281919 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:34Z","lastTransitionTime":"2026-01-26T14:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.385018 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.385101 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.385112 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.385136 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.385178 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:34Z","lastTransitionTime":"2026-01-26T14:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.488290 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.488380 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.488415 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.488449 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.488470 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:34Z","lastTransitionTime":"2026-01-26T14:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.591885 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.591952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.591969 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.591999 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.592017 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:34Z","lastTransitionTime":"2026-01-26T14:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.694711 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.694757 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.694766 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.694782 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.694792 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:34Z","lastTransitionTime":"2026-01-26T14:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.797612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.797673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.797686 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.797708 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.797723 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:34Z","lastTransitionTime":"2026-01-26T14:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.900837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.900889 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.900898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.900917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:34 crc kubenswrapper[4922]: I0126 14:10:34.900930 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:34Z","lastTransitionTime":"2026-01-26T14:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.004646 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.004725 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.004743 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.005130 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.005182 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.067242 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:27:51.34228037 +0000 UTC Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.092234 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:35 crc kubenswrapper[4922]: E0126 14:10:35.092441 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
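
The certificate_manager lines log a different rotation deadline on each pass (2025-11-28, 2025-12-01, 2025-11-10 for the same 2026-02-24 expiry) because the deadline is re-drawn at random from late in the certificate's validity window; since every sampled deadline is already in the past relative to the node clock, rotation is immediately due. A sketch of such a jittered deadline, assuming the roughly 70-90% window used by the upstream kubelet (the window and the one-year lifetime below are assumptions, not taken from this log):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the tail of the certificate's
// validity window, similar in spirit to the kubelet certificate manager's
// jittered deadline.
func rotationDeadline(notBefore, notAfter time.Time, rng *rand.Rand) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rng.Float64() // assumed window: [0.7, 0.9) of lifetime
	return notBefore.Add(time.Duration(float64(lifetime) * fraction))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(0, 0, -365) // assumed one-year lifetime
	rng := rand.New(rand.NewSource(time.Now().UnixNano()))
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter, rng))
	}
	// Each pass yields a different deadline, matching the log's varying values.
}
```
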
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.107628 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.107684 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.107708 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.107735 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.107750 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.210930 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.210998 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.211014 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.211045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.211089 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.314430 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.314498 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.314517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.314543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.314562 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.417536 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.417572 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.417579 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.417594 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.417604 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.519869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.519913 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.519925 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.519943 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.519955 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.623557 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.623606 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.623620 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.623639 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.623653 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.726687 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.726762 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.726775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.726799 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.726813 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.829641 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.829695 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.829704 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.829725 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.829737 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.933140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.933204 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.933215 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.933234 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:35 crc kubenswrapper[4922]: I0126 14:10:35.933246 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:35Z","lastTransitionTime":"2026-01-26T14:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.036623 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.036703 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.036721 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.036781 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.036800 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.067472 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:38:05.02843307 +0000 UTC Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.091353 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.091389 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.091353 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:36 crc kubenswrapper[4922]: E0126 14:10:36.091489 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:36 crc kubenswrapper[4922]: E0126 14:10:36.091776 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:36 crc kubenswrapper[4922]: E0126 14:10:36.092142 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.139643 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.140157 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.140310 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.140450 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.140594 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
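All three pod sync failures above reduce to the same root condition the kubelet keeps reporting: /etc/kubernetes/cni/net.d/ contains no CNI network configuration. A minimal Python sketch of the equivalent check (the extension filter mirrors the usual libcni convention of .conf/.conflist/.json files; treat the exact rules as an assumption, not the runtime's actual loader):

    import os

    # Directory named in the NetworkPluginNotReady message above.
    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

    def cni_configs(conf_dir: str) -> list[str]:
        """List files a CNI loader would consider as network configs."""
        try:
            names = os.listdir(conf_dir)
        except FileNotFoundError:
            return []
        return sorted(n for n in names if n.endswith((".conf", ".conflist", ".json")))

    found = cni_configs(CNI_CONF_DIR)
    print(found or f"no CNI configuration file in {CNI_CONF_DIR}")

On a healthy node this directory would typically gain a config file once the network plugin's pods come up, at which point NetworkReady flips to true.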
Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.243986 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.244047 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.244060 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.244108 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.244120 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.346894 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.346964 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.346974 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.346993 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.347007 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.450016 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.450118 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.450137 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.450167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.450184 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.554362 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.554431 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.554447 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.554473 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.554491 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.657099 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.657133 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.657143 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.657159 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.657170 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.759918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.759963 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.759972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.759989 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.759999 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.864264 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.864338 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.864350 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.864372 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.864386 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.866180 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.866263 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.866282 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.866312 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.866332 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: E0126 14:10:36.889625 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.895712 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.895776 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
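The patch above is rejected before it reaches the API server's storage: the node-identity webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-26. A sketch that fetches that certificate and repeats the comparison, run on the node itself since the webhook listens on loopback (assumes the third-party cryptography package):

    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party; pip install cryptography

    # Endpoint taken from the Post URL in the webhook error above.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))  # fetch only, no validation
    cert = x509.load_pem_x509_certificate(pem.encode())
    not_after = cert.not_valid_after_utc  # use .not_valid_after on cryptography < 42
    now = datetime.now(timezone.utc)
    print(f"notAfter={not_after:%Y-%m-%dT%H:%M:%SZ} expired={now > not_after}")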
event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.895788 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.895808 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.895822 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: E0126 14:10:36.915416 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.920836 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.920898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.920917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.920948 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.920966 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: E0126 14:10:36.941595 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.948272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.948612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
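The same patch keeps being retried with the identical x509 failure, so the actionable datum is the gap between the node clock and the certificate's notAfter. A sketch that pulls both timestamps out of the error text and computes that gap (the regex is an assumption tailored to the wording above):

    import re
    from datetime import datetime

    # Error text as emitted by kubelet_node_status.go:585 above.
    err = ("tls: failed to verify certificate: x509: certificate has expired or is "
           "not yet valid: current time 2026-01-26T14:10:36Z is after 2025-08-24T17:21:41Z")

    m = re.search(r"current time (\S+) is after (\S+)", err)
    now, not_after = (datetime.strptime(t, "%Y-%m-%dT%H:%M:%SZ") for t in m.groups())
    print(f"node clock is {now - not_after} past the certificate's notAfter")
    # -> node clock is 154 days, 20:48:55 past the certificate's notAfter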
event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.948802 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.948975 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.949170 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: E0126 14:10:36.969924 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.974212 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.974253 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.974263 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.974289 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.974322 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:36 crc kubenswrapper[4922]: E0126 14:10:36.989630 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:36 crc kubenswrapper[4922]: E0126 14:10:36.989796 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.991629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.991687 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.991698 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.991719 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:36 crc kubenswrapper[4922]: I0126 14:10:36.991733 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:36Z","lastTransitionTime":"2026-01-26T14:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.068314 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:36:49.992130707 +0000 UTC Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.092508 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:37 crc kubenswrapper[4922]: E0126 14:10:37.092743 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.094308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.094358 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.094374 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.094395 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.094411 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:37Z","lastTransitionTime":"2026-01-26T14:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.198419 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.198480 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.198492 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.198512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.198523 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:37Z","lastTransitionTime":"2026-01-26T14:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.304032 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.304113 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.304126 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.304156 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.304173 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:37Z","lastTransitionTime":"2026-01-26T14:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.406235 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.406279 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.406291 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.406312 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.406324 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:37Z","lastTransitionTime":"2026-01-26T14:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.509375 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.509432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.509450 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.509472 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.509486 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:37Z","lastTransitionTime":"2026-01-26T14:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.612467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.612517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.612531 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.612555 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.612575 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:37Z","lastTransitionTime":"2026-01-26T14:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.715897 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.715938 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.715946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.715985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.715999 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:37Z","lastTransitionTime":"2026-01-26T14:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.819010 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.819059 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.819083 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.819103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.819114 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:37Z","lastTransitionTime":"2026-01-26T14:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.922984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.923082 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.923092 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.923110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:37 crc kubenswrapper[4922]: I0126 14:10:37.923147 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:37Z","lastTransitionTime":"2026-01-26T14:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.026146 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.026186 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.026219 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.026239 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.026251 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.069456 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:15:35.655465934 +0000 UTC Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.092255 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.092301 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.092326 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:38 crc kubenswrapper[4922]: E0126 14:10:38.092478 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:38 crc kubenswrapper[4922]: E0126 14:10:38.092665 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:38 crc kubenswrapper[4922]: E0126 14:10:38.092831 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.129658 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.129824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.129849 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.129921 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.129987 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.234250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.234300 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.234311 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.234329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.234340 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.338175 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.338702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.338940 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.339277 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.339515 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.443247 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.443332 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.443348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.443373 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.443389 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.545803 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.545868 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.545880 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.545900 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.545913 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.649577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.649640 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.649657 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.649682 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.649699 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.753517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.753573 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.753587 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.753607 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.753620 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.857284 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.857346 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.857358 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.857397 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.857424 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.960381 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.960450 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.960467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.960493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:38 crc kubenswrapper[4922]: I0126 14:10:38.960511 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:38Z","lastTransitionTime":"2026-01-26T14:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.063703 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.063763 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.063775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.063797 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.063811 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:39Z","lastTransitionTime":"2026-01-26T14:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.069880 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:59:14.913069818 +0000 UTC Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.091571 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:39 crc kubenswrapper[4922]: E0126 14:10:39.091811 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.167459 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.167522 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.167541 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.167567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.167586 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:39Z","lastTransitionTime":"2026-01-26T14:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.271401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.271446 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.271457 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.271475 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.271497 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:39Z","lastTransitionTime":"2026-01-26T14:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.374173 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.374259 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.374273 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.374319 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:10:39 crc kubenswrapper[4922]: I0126 14:10:39.374336 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:39Z","lastTransitionTime":"2026-01-26T14:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record status cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats at 14:10:39.476881, .579648, .682359, .784534, .887616 and .991283 ...]
Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.070712 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:14:25.670628963 +0000 UTC
Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.092168 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.092213 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.092167 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:10:40 crc kubenswrapper[4922]: E0126 14:10:40.092451 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 14:10:40 crc kubenswrapper[4922]: E0126 14:10:40.092578 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 14:10:40 crc kubenswrapper[4922]: E0126 14:10:40.092776 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
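The cycle above repeats because the container runtime keeps reporting NetworkReady=false until a CNI configuration appears in /etc/kubernetes/cni/net.d/. As a quick check, the sketch below lists the files a CNI loader would pick up from that directory; the extension filter (*.conf, *.conflist, *.json) mirrors libcni's documented defaults and is an assumption here, not something taken from this log.

// cnicheck.go - minimal sketch: report whether a CNI config directory
// contains any files a libcni-style loader would consider.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet errors; overridable for testing.
	dir := "/etc/kubernetes/cni/net.d"
	if len(os.Args) > 1 {
		dir = os.Args[1]
	}
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		// Assumed extension list, matching libcni's defaults.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config candidate:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files found; the network plugin has not written its config yet")
	}
}

An empty result on the node corresponds to the "no CNI configuration file" message the kubelet keeps printing; once the network operator writes its config file, the Ready condition should flip.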
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.094465 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.094521 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.094543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.094573 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.094596 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:40Z","lastTransitionTime":"2026-01-26T14:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.197688 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.197760 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.197782 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.197812 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.197835 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:40Z","lastTransitionTime":"2026-01-26T14:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.302553 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.302685 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.302709 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.302743 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.302770 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:40Z","lastTransitionTime":"2026-01-26T14:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.406787 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.406849 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.406870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.406902 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.406925 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:40Z","lastTransitionTime":"2026-01-26T14:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.509523 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.509573 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.509587 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.509605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.509617 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:40Z","lastTransitionTime":"2026-01-26T14:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.612411 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.612484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.612497 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.612515 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.612528 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:40Z","lastTransitionTime":"2026-01-26T14:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.716869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.716929 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.716948 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.716972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.716988 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:40Z","lastTransitionTime":"2026-01-26T14:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.819482 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.819528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.819543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.819561 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.819572 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:40Z","lastTransitionTime":"2026-01-26T14:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.922733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.922779 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.922788 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.922809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:40 crc kubenswrapper[4922]: I0126 14:10:40.922821 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:40Z","lastTransitionTime":"2026-01-26T14:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.026337 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.026374 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.026385 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.026403 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.026416 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.071337 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:53:51.153071535 +0000 UTC Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.092381 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:41 crc kubenswrapper[4922]: E0126 14:10:41.092537 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
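Each certificate_manager line recomputes a rotation deadline that already lies in the past relative to the node clock (2026-01-26), which is why the kubelet keeps retrying rotation roughly once per second. The sketch below reproduces the idea: client-go reportedly picks a jittered deadline at roughly 70 to 90 percent of the certificate's lifetime, so a clock that has jumped past that window makes rotation due immediately. The issuance time used here is hypothetical; only the 2026-02-24 expiry appears in the log, and the jitter formula is an approximation, not the exact client-go code.

// rotationdeadline.go - sketch of how a jittered rotation deadline lands in
// the past when the node clock has jumped ahead of the rotation window.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	notBefore := time.Date(2025, time.May, 26, 5, 53, 3, 0, time.UTC)    // hypothetical issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
	now := time.Date(2026, time.January, 26, 14, 10, 41, 0, time.UTC)    // node clock in the log

	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // rotate somewhere in the assumed 70-90% window
	deadline := notBefore.Add(time.Duration(float64(lifetime) * frac))

	fmt.Println("rotation deadline:", deadline)
	fmt.Println("already past deadline:", now.After(deadline)) // true here, so rotation is retried each sync
}

Re-running this prints a different deadline each time, which matches the log: every retry reports a new deadline (2025-12-30, 2025-11-23, 2025-12-31, 2025-12-17), all of them before the node's current time.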
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.129538 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.129579 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.129608 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.129630 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.129643 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.232662 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.232710 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.232720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.232740 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.232751 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.335917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.335974 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.335983 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.335999 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.336010 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.438325 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.438374 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.438385 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.438404 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.438414 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.540529 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.540584 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.540595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.540611 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.540622 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.644035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.644139 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.644154 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.644181 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.644195 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.746862 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.746910 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.746921 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.746939 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.746951 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.850246 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.850288 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.850301 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.850344 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.850360 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.953339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.953412 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.953425 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.953445 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:41 crc kubenswrapper[4922]: I0126 14:10:41.953483 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:41Z","lastTransitionTime":"2026-01-26T14:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.058430 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.058473 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.058485 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.058503 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.058514 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:42Z","lastTransitionTime":"2026-01-26T14:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.072576 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:36:50.941180855 +0000 UTC Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.092572 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.092606 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:42 crc kubenswrapper[4922]: E0126 14:10:42.092719 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.092587 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:42 crc kubenswrapper[4922]: E0126 14:10:42.092795 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:42 crc kubenswrapper[4922]: E0126 14:10:42.092843 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.161440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.161499 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.161517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.161547 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:42 crc kubenswrapper[4922]: I0126 14:10:42.161570 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:42Z","lastTransitionTime":"2026-01-26T14:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.073821 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:09:48.012286772 +0000 UTC
[... status cycle repeats once more at 14:10:43.089525 ...]
Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.091528 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt"
Jan 26 14:10:43 crc kubenswrapper[4922]: E0126 14:10:43.091703 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.109762 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.143015 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.163917 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.185955 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.192881 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.192928 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.192938 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.192959 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 
14:10:43.192971 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:43Z","lastTransitionTime":"2026-01-26T14:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.203432 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.225622 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.242080 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.261522 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.285594 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.297616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.297671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.297687 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.297710 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.297727 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:43Z","lastTransitionTime":"2026-01-26T14:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.305893 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4
ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.321406 4922 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.339765 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.353136 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.366394 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.378506 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.391638 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.399952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.400000 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.400012 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.400032 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.400045 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:43Z","lastTransitionTime":"2026-01-26T14:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.405614 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.427253 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.502646 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.502716 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.502727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.502745 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.502759 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:43Z","lastTransitionTime":"2026-01-26T14:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
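The status-patch failures above (for both multus-additional-cni-plugins-52ctw and ovnkube-node-5m7p9) share one root cause: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-26T14:10:43Z). Below is a minimal Go sketch of the validity-window check that yields this exact error class; the NotAfter value is taken from the log, while the NotBefore value is an assumption for illustration.

```go
// Minimal sketch (not OpenShift code) of the crypto/x509 validity check that the
// TLS handshake in the log trips over: a chain is rejected whenever the current
// time falls outside the leaf's [NotBefore, NotAfter] window.
package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// Produces "x509: certificate has expired or is not yet valid: ...",
		// matching the webhook failure logged above.
		return x509.CertificateInvalidError{Cert: cert, Reason: x509.Expired}
	}
	return nil
}

func main() {
	cert := &x509.Certificate{
		NotBefore: time.Date(2025, 5, 24, 17, 21, 41, 0, time.UTC), // assumed issue time
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry from the log
	}
	now := time.Date(2026, 1, 26, 14, 10, 43, 0, time.UTC) // node clock from the log
	fmt.Println(checkValidity(cert, now))
}
```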
Has your network provider started?"} Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.605958 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.606037 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.606105 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.606133 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.606151 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:43Z","lastTransitionTime":"2026-01-26T14:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.710359 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.710407 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.710420 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.710438 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.710453 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:43Z","lastTransitionTime":"2026-01-26T14:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.814529 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.814585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.814598 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.814621 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.814633 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:43Z","lastTransitionTime":"2026-01-26T14:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.917850 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.917886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.917896 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.917910 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:43 crc kubenswrapper[4922]: I0126 14:10:43.917922 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:43Z","lastTransitionTime":"2026-01-26T14:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.019869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.019906 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.019914 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.019929 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.019937 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.074743 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:55:06.57177458 +0000 UTC Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.091448 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.091462 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:44 crc kubenswrapper[4922]: E0126 14:10:44.091589 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:44 crc kubenswrapper[4922]: E0126 14:10:44.091730 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.091481 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:44 crc kubenswrapper[4922]: E0126 14:10:44.091893 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.122974 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.123018 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.123030 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.123045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.123057 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.225754 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.226000 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.226013 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.226030 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.226039 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.329091 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.329135 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.329145 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.329161 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.329172 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.431976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.432013 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.432022 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.432040 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.432050 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.534613 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.534673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.534683 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.534704 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.534717 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.637391 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.637438 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.637449 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.637469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.637483 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.743045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.743174 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.743208 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.743227 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.743239 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.846262 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.846372 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.846432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.846469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.846495 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.949686 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.949745 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.949759 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.949778 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:44 crc kubenswrapper[4922]: I0126 14:10:44.949792 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:44Z","lastTransitionTime":"2026-01-26T14:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.053057 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.053132 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.053145 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.053168 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.053182 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.076460 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:16:56.276900851 +0000 UTC Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.092040 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:45 crc kubenswrapper[4922]: E0126 14:10:45.092259 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.156275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.156334 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.156349 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.156369 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.156383 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.259322 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.259364 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.259374 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.259390 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.259400 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.362599 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.362957 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.363049 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.363204 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.363310 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.466533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.466579 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.466594 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.466616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.466634 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.569577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.569616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.569626 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.569641 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.569652 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.672709 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.672781 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.672798 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.672824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.672845 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.775911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.775950 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.775960 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.775979 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.775991 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.878469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.878526 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.878537 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.878556 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.878572 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.981502 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.981555 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.981567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.981586 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:45 crc kubenswrapper[4922]: I0126 14:10:45.981601 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:45Z","lastTransitionTime":"2026-01-26T14:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.077349 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:09:55.284424488 +0000 UTC Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.085157 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.085208 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.085221 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.085243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.085256 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:46Z","lastTransitionTime":"2026-01-26T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.091652 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.091675 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.091665 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:46 crc kubenswrapper[4922]: E0126 14:10:46.091787 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:46 crc kubenswrapper[4922]: E0126 14:10:46.091853 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:46 crc kubenswrapper[4922]: E0126 14:10:46.091925 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.188584 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.188644 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.188661 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.188687 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.188744 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:46Z","lastTransitionTime":"2026-01-26T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.292387 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.292434 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.292442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.292458 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.292473 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:46Z","lastTransitionTime":"2026-01-26T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.394995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.395036 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.395046 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.395066 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.395106 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:46Z","lastTransitionTime":"2026-01-26T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.498249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.498306 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.498329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.498362 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.498384 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:46Z","lastTransitionTime":"2026-01-26T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.601398 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.601481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.601507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.601540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.601567 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:46Z","lastTransitionTime":"2026-01-26T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.705459 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.705514 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.705551 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.705571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.705580 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:46Z","lastTransitionTime":"2026-01-26T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.809405 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.809551 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.809572 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.809599 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.809617 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:46Z","lastTransitionTime":"2026-01-26T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.912806 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.912854 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.912863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.912890 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:46 crc kubenswrapper[4922]: I0126 14:10:46.912901 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:46Z","lastTransitionTime":"2026-01-26T14:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.015859 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.015915 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.015927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.015948 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.015959 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.031542 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.031865 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.031914 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.031940 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.032034 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: E0126 14:10:47.049599 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:47Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.053989 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.054024 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.054034 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.054050 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.054061 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: E0126 14:10:47.072257 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:47Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.077568 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:30:46.922854901 +0000 UTC Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.077724 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.077759 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.077768 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.077783 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.077794 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: E0126 14:10:47.090851 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:47Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.092285 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:47 crc kubenswrapper[4922]: E0126 14:10:47.093317 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.099469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.099509 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.099520 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.099534 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.099545 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: E0126 14:10:47.136799 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:47Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.146425 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.146478 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.146492 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.146513 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.146527 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: E0126 14:10:47.174196 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:47Z is after 
2025-08-24T17:21:41Z" Jan 26 14:10:47 crc kubenswrapper[4922]: E0126 14:10:47.174345 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.176285 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.176338 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.176351 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.176373 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.176387 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.278908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.278954 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.278965 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.278985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.278998 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.381655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.381693 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.381704 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.381720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.381730 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.484642 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.484718 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.484731 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.484754 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.484769 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.588033 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.588103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.588116 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.588140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.588152 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.691758 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.691798 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.691809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.691827 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.691837 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.795635 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.795896 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.795962 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.796040 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.796130 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.898808 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.898864 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.898877 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.898899 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:47 crc kubenswrapper[4922]: I0126 14:10:47.898917 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:47Z","lastTransitionTime":"2026-01-26T14:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.003096 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.003154 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.003164 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.003184 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.003195 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.078387 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:35:34.912559685 +0000 UTC Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.091969 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.092033 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.092047 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:48 crc kubenswrapper[4922]: E0126 14:10:48.092264 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:48 crc kubenswrapper[4922]: E0126 14:10:48.092396 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:48 crc kubenswrapper[4922]: E0126 14:10:48.092555 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.106478 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.106536 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.106548 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.106569 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.106580 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.209545 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.209589 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.209601 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.209623 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.209637 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.313110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.313165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.313175 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.313194 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.313205 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.416250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.416595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.416712 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.417016 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.417249 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.520919 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.520971 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.520985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.521007 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.521021 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.624233 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.624271 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.624283 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.624298 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.624310 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.727125 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.727193 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.727207 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.727237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.727258 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.830219 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.830269 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.830280 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.830298 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.830311 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.933519 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.933581 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.933591 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.933610 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:48 crc kubenswrapper[4922]: I0126 14:10:48.933624 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:48Z","lastTransitionTime":"2026-01-26T14:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.036967 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.037017 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.037030 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.037051 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.037084 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.078906 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:12:36.587943687 +0000 UTC Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.092405 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:49 crc kubenswrapper[4922]: E0126 14:10:49.093045 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.094316 4922 scope.go:117] "RemoveContainer" containerID="56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0" Jan 26 14:10:49 crc kubenswrapper[4922]: E0126 14:10:49.094731 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.139682 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.139749 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.139765 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.139789 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.139803 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.243712 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.243774 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.243789 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.243813 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.243831 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.248597 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:49 crc kubenswrapper[4922]: E0126 14:10:49.248764 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:49 crc kubenswrapper[4922]: E0126 14:10:49.248883 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs podName:756187f6-68ea-4408-8d07-f691e16b4484 nodeName:}" failed. No retries permitted until 2026-01-26 14:11:21.248853493 +0000 UTC m=+98.451116265 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs") pod "network-metrics-daemon-pzxnt" (UID: "756187f6-68ea-4408-8d07-f691e16b4484") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.346748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.346832 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.346845 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.346864 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.346878 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.449234 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.449300 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.449321 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.449348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.449367 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.551746 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.551813 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.551824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.551844 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.551857 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.554325 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/0.log" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.554384 4922 generic.go:334] "Generic (PLEG): container finished" podID="103e8f62-57c7-4d49-b740-16d357710e61" containerID="92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197" exitCode=1 Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.554425 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zx7f" event={"ID":"103e8f62-57c7-4d49-b740-16d357710e61","Type":"ContainerDied","Data":"92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.554915 4922 scope.go:117] "RemoveContainer" containerID="92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.572428 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.587865 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"2026-01-26T14:10:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616\\\\n2026-01-26T14:10:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616 to /host/opt/cni/bin/\\\\n2026-01-26T14:10:04Z [verbose] multus-daemon started\\\\n2026-01-26T14:10:04Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:10:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.602696 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.617201 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.626389 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.636524 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.650384 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.655594 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.655651 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 
14:10:49.655664 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.655687 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.655702 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.666928 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.680971 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.701197 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.725239 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.740129 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.758315 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.758365 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.758380 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.758401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.758413 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.765409 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.782388 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.795525 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.809561 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.824172 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.840221 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:49Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.868196 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.868244 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.868255 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.868275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.868289 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.971673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.971724 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.971736 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.971755 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:49 crc kubenswrapper[4922]: I0126 14:10:49.971768 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:49Z","lastTransitionTime":"2026-01-26T14:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.074651 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.074712 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.074725 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.074744 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.074756 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:50Z","lastTransitionTime":"2026-01-26T14:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.079884 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:37:25.533028341 +0000 UTC Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.092282 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.092323 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.092380 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:50 crc kubenswrapper[4922]: E0126 14:10:50.092513 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:50 crc kubenswrapper[4922]: E0126 14:10:50.092643 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:50 crc kubenswrapper[4922]: E0126 14:10:50.092789 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.178151 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.178210 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.178225 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.178255 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.178272 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:50Z","lastTransitionTime":"2026-01-26T14:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.281202 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.281684 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.281694 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.281711 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.281723 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:50Z","lastTransitionTime":"2026-01-26T14:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.384655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.384697 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.384706 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.384727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.384739 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:50Z","lastTransitionTime":"2026-01-26T14:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.487364 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.487420 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.487431 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.487452 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.487466 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:50Z","lastTransitionTime":"2026-01-26T14:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.560960 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/0.log" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.561040 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zx7f" event={"ID":"103e8f62-57c7-4d49-b740-16d357710e61","Type":"ContainerStarted","Data":"092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.577881 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.590137 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.590201 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.590212 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.590237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.590248 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:50Z","lastTransitionTime":"2026-01-26T14:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.592575 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.606298 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.617747 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 
14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.641937 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.657637 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.672549 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.687756 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.693016 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.693056 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.693082 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.693099 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.693111 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:50Z","lastTransitionTime":"2026-01-26T14:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.704473 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"2026-01-26T14:10:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616\\\\n2026-01-26T14:10:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616 to /host/opt/cni/bin/\\\\n2026-01-26T14:10:04Z [verbose] multus-daemon started\\\\n2026-01-26T14:10:04Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:10:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.716681 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.729319 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.747162 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.764105 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.779466 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.795898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.795952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.795969 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.795996 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.796013 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:50Z","lastTransitionTime":"2026-01-26T14:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.804689 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.822621 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.841717 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.861621 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name
\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:50Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.899893 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.899943 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.899954 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.899972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:50 crc kubenswrapper[4922]: I0126 14:10:50.899984 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:50Z","lastTransitionTime":"2026-01-26T14:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.002915 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.002964 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.002973 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.002991 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.003002 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.080767 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:59:07.162106169 +0000 UTC Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.092403 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:51 crc kubenswrapper[4922]: E0126 14:10:51.092581 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.106431 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.106548 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.106577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.106608 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.106632 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.210352 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.210403 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.210415 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.210436 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.210448 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.313494 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.313567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.313577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.313593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.313606 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.416714 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.416754 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.416765 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.416785 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.416798 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.519751 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.519818 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.519838 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.519864 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.519882 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.622918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.622974 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.622984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.623007 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.623019 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.726625 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.726984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.727064 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.727169 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.727253 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.830959 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.831006 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.831016 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.831034 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.831046 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.934598 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.934675 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.934691 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.934713 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:51 crc kubenswrapper[4922]: I0126 14:10:51.934728 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:51Z","lastTransitionTime":"2026-01-26T14:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.038620 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.038690 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.038714 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.038749 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.038774 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.081163 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:52:35.370877766 +0000 UTC Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.091703 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.091765 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.091729 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:52 crc kubenswrapper[4922]: E0126 14:10:52.091933 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:52 crc kubenswrapper[4922]: E0126 14:10:52.092113 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:52 crc kubenswrapper[4922]: E0126 14:10:52.092260 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.141884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.141941 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.141961 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.141983 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.142002 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.245996 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.246048 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.246106 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.246127 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.246143 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.350355 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.350411 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.350429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.350456 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.350476 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.454163 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.454220 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.454229 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.454247 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.454258 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.557824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.557922 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.557939 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.557961 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.557975 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.661949 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.662454 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.662612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.662736 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.662823 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.765736 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.765803 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.765827 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.765861 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.765882 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.868932 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.869285 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.869424 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.869523 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.869623 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.972621 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.972671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.972684 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.972702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:52 crc kubenswrapper[4922]: I0126 14:10:52.972714 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:52Z","lastTransitionTime":"2026-01-26T14:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.076134 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.076181 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.076195 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.076218 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.076232 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:53Z","lastTransitionTime":"2026-01-26T14:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.081342 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:49:12.168961943 +0000 UTC Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.092263 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:53 crc kubenswrapper[4922]: E0126 14:10:53.092435 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.116177 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.130484 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.144507 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.167230 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.178571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.178602 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.178611 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.178627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.178640 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:53Z","lastTransitionTime":"2026-01-26T14:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.181856 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.207918 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.231953 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.254560 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.271783 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.281935 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.282013 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.282038 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.282128 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.282155 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:53Z","lastTransitionTime":"2026-01-26T14:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.287560 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.303142 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.317746 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.331909 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"2026-01-26T14:10:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616\\\\n2026-01-26T14:10:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616 to /host/opt/cni/bin/\\\\n2026-01-26T14:10:04Z [verbose] multus-daemon started\\\\n2026-01-26T14:10:04Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:10:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.355790 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.373592 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.385054 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.385113 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.385125 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.385145 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.385159 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:53Z","lastTransitionTime":"2026-01-26T14:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.389241 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.405092 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.422236 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.488230 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.488277 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 
14:10:53.488295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.488321 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.488339 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:53Z","lastTransitionTime":"2026-01-26T14:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.591403 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.591469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.591484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.591510 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.591529 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:53Z","lastTransitionTime":"2026-01-26T14:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.695845 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.695932 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.695954 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.695985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.696004 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:53Z","lastTransitionTime":"2026-01-26T14:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.798535 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.798567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.798579 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.798595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.798605 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:53Z","lastTransitionTime":"2026-01-26T14:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.900842 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.900898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.900911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.900934 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:53 crc kubenswrapper[4922]: I0126 14:10:53.900949 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:53Z","lastTransitionTime":"2026-01-26T14:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.005161 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.005199 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.005209 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.005225 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.005235 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.082163 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:59:25.715479042 +0000 UTC Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.091723 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.091728 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:54 crc kubenswrapper[4922]: E0126 14:10:54.092048 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:54 crc kubenswrapper[4922]: E0126 14:10:54.092199 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.092494 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:54 crc kubenswrapper[4922]: E0126 14:10:54.092645 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.106342 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.107777 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.107834 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.107853 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.107881 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.107897 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.210956 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.211010 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.211022 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.211044 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.211084 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.313374 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.313424 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.313440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.313459 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.313473 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.416813 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.417245 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.417387 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.417522 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.417667 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.521269 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.521751 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.521949 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.522216 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.522449 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.625369 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.625702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.625847 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.625942 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.626028 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.729422 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.729843 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.729959 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.730106 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.730686 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.834416 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.834464 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.834475 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.834494 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.834505 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.937546 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.937612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.937624 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.937649 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:54 crc kubenswrapper[4922]: I0126 14:10:54.937668 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:54Z","lastTransitionTime":"2026-01-26T14:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.040994 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.041048 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.041060 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.041097 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.041108 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.083318 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:01:52.025103723 +0000 UTC Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.091934 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:55 crc kubenswrapper[4922]: E0126 14:10:55.092259 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.145184 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.145589 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.145715 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.145820 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.145889 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.249150 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.249224 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.249247 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.249278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.249296 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.352233 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.352582 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.352657 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.352731 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.352793 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.455263 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.455631 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.455731 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.455823 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.455921 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.559221 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.559296 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.559318 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.559355 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.559380 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.662865 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.663327 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.663429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.663560 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.663666 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.766820 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.766901 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.766918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.766946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.766964 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.870154 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.870210 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.870221 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.870245 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.870257 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.973188 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.973242 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.973255 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.973278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:55 crc kubenswrapper[4922]: I0126 14:10:55.973295 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:55Z","lastTransitionTime":"2026-01-26T14:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.075871 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.075912 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.075922 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.075939 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.075949 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:56Z","lastTransitionTime":"2026-01-26T14:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.084246 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:54:05.980814017 +0000 UTC Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.091607 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.091679 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.091699 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:56 crc kubenswrapper[4922]: E0126 14:10:56.091804 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:56 crc kubenswrapper[4922]: E0126 14:10:56.091934 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:56 crc kubenswrapper[4922]: E0126 14:10:56.092115 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.178673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.178764 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.178786 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.178818 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.178843 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:56Z","lastTransitionTime":"2026-01-26T14:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.282316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.282347 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.282355 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.282372 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.282382 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:56Z","lastTransitionTime":"2026-01-26T14:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.385740 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.385796 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.385814 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.385840 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.385860 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:56Z","lastTransitionTime":"2026-01-26T14:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.489439 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.489485 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.489498 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.489517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.489529 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:56Z","lastTransitionTime":"2026-01-26T14:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.593094 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.593445 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.593556 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.593655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.593747 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:56Z","lastTransitionTime":"2026-01-26T14:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.696866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.696914 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.696924 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.696942 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.696955 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:56Z","lastTransitionTime":"2026-01-26T14:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.800028 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.800092 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.800101 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.800119 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.800130 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:56Z","lastTransitionTime":"2026-01-26T14:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.903338 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.903401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.903415 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.903437 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:56 crc kubenswrapper[4922]: I0126 14:10:56.903451 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:56Z","lastTransitionTime":"2026-01-26T14:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.006971 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.007047 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.007089 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.007116 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.007135 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.085043 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:19:00.485136539 +0000 UTC Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.091620 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:57 crc kubenswrapper[4922]: E0126 14:10:57.091921 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.110527 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.110581 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.110600 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.110623 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.110642 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.214373 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.214427 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.214462 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.214485 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.214498 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.229013 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.229131 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.229159 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.229190 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.229214 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: E0126 14:10:57.248285 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:57Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.253896 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.253937 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.253956 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.253980 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.253999 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: E0126 14:10:57.271914 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:57Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.277009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.277049 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.277097 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.277123 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.277140 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: E0126 14:10:57.292676 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:57Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.298849 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.298888 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.298905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.298924 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.298941 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: E0126 14:10:57.316130 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:57Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.320805 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.320949 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.320967 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.320990 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.321006 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: E0126 14:10:57.339022 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:10:57Z is after 2025-08-24T17:21:41Z" Jan 26 14:10:57 crc kubenswrapper[4922]: E0126 14:10:57.339331 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.341838 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
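Every one of the failed status patches above dies at the same admission webhook: node.network-node-identity.openshift.io at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, five months before the node's current clock of 2026-01-26T14:10:57Z, so the API server rejects each PATCH and the kubelet exhausts its retry budget. A minimal Go sketch of how to confirm that from the node (illustrative only, not kubelet code; it assumes the webhook endpoint named in the log is reachable locally):

```go
// webhook-cert-check: dial the webhook endpoint from the log and print the
// serving certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify lets us read the certificate even though it has
	// expired; this connection is for inspection only, never for real traffic.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:    %s\n", cert.Subject)
	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
	// Mirrors the x509 error text: "current time ... is after ..."
	fmt.Printf("expired:    %v\n", time.Now().After(cert.NotAfter))
}
```

Until that certificate is renewed (or the node clock corrected), no node-status update can land, which is why the sequence above ends in "update node status exceeds retry count".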
event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.341879 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.341895 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.341916 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.341932 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.444737 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.444794 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.444811 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.444838 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.444857 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.547908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.547958 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.547974 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.547999 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.548015 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.650976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.651036 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.651056 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.651108 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.651128 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.753919 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.753996 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.754053 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.754133 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.754162 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.857178 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.857237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.857249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.857275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.857291 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.960147 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.960202 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.960213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.960234 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:57 crc kubenswrapper[4922]: I0126 14:10:57.960248 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:57Z","lastTransitionTime":"2026-01-26T14:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.062384 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.062449 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.062461 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.062478 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.062506 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.085618 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 17:47:44.69320763 +0000 UTC Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.092075 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.092130 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.092104 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:10:58 crc kubenswrapper[4922]: E0126 14:10:58.092241 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:10:58 crc kubenswrapper[4922]: E0126 14:10:58.092475 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:10:58 crc kubenswrapper[4922]: E0126 14:10:58.092618 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.165157 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.165242 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.165267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.165294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.165311 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.268854 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.268953 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.268977 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.269011 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.269033 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.371508 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.371584 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.371598 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.371617 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.371634 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.474428 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.474490 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.474508 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.474533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.474552 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.577887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.577952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.577978 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.578012 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.578035 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.681683 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.681735 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.681744 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.681763 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.681774 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.785272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.785341 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.785359 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.785389 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.785412 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.888344 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.888564 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.888622 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.888648 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.888661 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.991113 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.991189 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.991212 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.991243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:58 crc kubenswrapper[4922]: I0126 14:10:58.991271 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:58Z","lastTransitionTime":"2026-01-26T14:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.086580 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:48:27.322948284 +0000 UTC Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.092253 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:10:59 crc kubenswrapper[4922]: E0126 14:10:59.092875 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.094194 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.094272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.094299 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.094329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.094352 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:59Z","lastTransitionTime":"2026-01-26T14:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.198663 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.198749 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.198769 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.198812 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.198833 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:59Z","lastTransitionTime":"2026-01-26T14:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.302222 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.302303 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.302327 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.302361 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.302384 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:59Z","lastTransitionTime":"2026-01-26T14:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.406324 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.406830 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.406947 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.407054 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.407187 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:59Z","lastTransitionTime":"2026-01-26T14:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.511346 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.511417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.511436 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.511468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.511488 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:59Z","lastTransitionTime":"2026-01-26T14:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.615105 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.615191 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.615212 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.615242 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.615260 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:59Z","lastTransitionTime":"2026-01-26T14:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.717921 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.717992 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.718009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.718038 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.718058 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:59Z","lastTransitionTime":"2026-01-26T14:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.821609 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.821765 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.821796 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.821878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.821899 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:59Z","lastTransitionTime":"2026-01-26T14:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.924741 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.924781 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.924791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.924810 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:10:59 crc kubenswrapper[4922]: I0126 14:10:59.924820 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:10:59Z","lastTransitionTime":"2026-01-26T14:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.027726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.027770 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.027781 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.027802 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.027818 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.087491 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:26:09.31803842 +0000 UTC Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.091927 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.092033 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.092097 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:00 crc kubenswrapper[4922]: E0126 14:11:00.092233 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:00 crc kubenswrapper[4922]: E0126 14:11:00.092651 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:00 crc kubenswrapper[4922]: E0126 14:11:00.092761 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.093052 4922 scope.go:117] "RemoveContainer" containerID="56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.130215 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.130258 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.130267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.130283 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.130294 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.234059 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.234135 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.234149 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.234184 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.234199 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.338575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.338624 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.338635 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.338670 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.338684 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.441555 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.441621 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.441640 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.441667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.441690 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.545423 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.545831 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.545968 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.546134 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.546275 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.600105 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/2.log" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.602231 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.603568 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.615924 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.631247 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.644014 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.648922 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.648976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 
14:11:00.648985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.649015 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.649026 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.659504 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.677051 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.698690 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.725913 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.741646 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.754872 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.790207 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.791250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.791299 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.791311 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.791334 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.791345 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.805723 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.834545 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.851454 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.867091 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.894598 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.894684 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.894700 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.894745 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.894763 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.895568 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.914674 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.935991 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"2026-01-26T14:10:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616\\\\n2026-01-26T14:10:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616 to /host/opt/cni/bin/\\\\n2026-01-26T14:10:04Z [verbose] multus-daemon started\\\\n2026-01-26T14:10:04Z [verbose] Readiness 
Indicator file check\\\\n2026-01-26T14:10:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.950267 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0accf413-9e1c-4104-830a-6700b94027bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f85a98054e53886511d2b982872884c925f3331ec72172233c1e15f36d2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.965881 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:00Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.997286 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.997352 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.997363 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.997384 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:00 crc kubenswrapper[4922]: I0126 14:11:00.997399 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:00Z","lastTransitionTime":"2026-01-26T14:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.087801 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 13:14:06.965919049 +0000 UTC Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.092357 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:01 crc kubenswrapper[4922]: E0126 14:11:01.092536 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.100091 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.100135 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.100146 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.100166 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.100176 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:01Z","lastTransitionTime":"2026-01-26T14:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.203123 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.203493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.203507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.203528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.203542 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:01Z","lastTransitionTime":"2026-01-26T14:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.305832 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.305892 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.305904 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.305928 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.305942 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:01Z","lastTransitionTime":"2026-01-26T14:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.409153 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.409199 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.409208 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.409226 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.409242 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:01Z","lastTransitionTime":"2026-01-26T14:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.511908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.511953 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.511961 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.511978 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.511989 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:01Z","lastTransitionTime":"2026-01-26T14:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.614979 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.615096 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.615121 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.615152 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.615178 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:01Z","lastTransitionTime":"2026-01-26T14:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.718475 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.718546 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.718565 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.718594 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.718614 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:01Z","lastTransitionTime":"2026-01-26T14:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.822167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.822238 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.822255 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.822280 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.822299 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:01Z","lastTransitionTime":"2026-01-26T14:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.925574 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.925671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.925695 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.925739 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:01 crc kubenswrapper[4922]: I0126 14:11:01.925761 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:01Z","lastTransitionTime":"2026-01-26T14:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.028806 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.028871 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.028885 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.028905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.028920 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
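The condition above repeats roughly every 100 ms with the same reason: the kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/ and therefore keeps the node NotReady. A minimal Python sketch of that check (the directory path is taken from the message itself; which extensions count as CNI configs is an assumption based on common CNI loader behavior, not read from this node):

import os

CNI_DIR = "/etc/kubernetes/cni/net.d/"      # directory named in the kubelet message
CNI_EXTS = (".conf", ".conflist", ".json")  # assumption: extensions CNI loaders commonly accept

def cni_configs(path=CNI_DIR):
    # Return the CNI config files present, or [] if the directory is missing or empty.
    try:
        return sorted(f for f in os.listdir(path) if f.endswith(CNI_EXTS))
    except FileNotFoundError:
        return []

if __name__ == "__main__":
    found = cni_configs()
    print(f"{len(found)} CNI config file(s) in {CNI_DIR}: {found}")

On a healthy OVN-Kubernetes node one would expect this to turn non-empty (a file such as 10-ovn-kubernetes.conf) once the network plugin writes its config; here it stays empty because ovnkube-controller is crash-looping, as the entries below show.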
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.088638 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:51:10.821991179 +0000 UTC
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.091939 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.092032 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.092161 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:11:02 crc kubenswrapper[4922]: E0126 14:11:02.092407 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 14:11:02 crc kubenswrapper[4922]: E0126 14:11:02.092502 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 14:11:02 crc kubenswrapper[4922]: E0126 14:11:02.092672 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.131977 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.132046 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.132088 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.132121 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.132140 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.235332 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.235404 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.235423 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.235452 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.235474 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.339183 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.339249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.339265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.339294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.339312 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.441475 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.441533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.441553 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.441577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.441595 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.544904 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.544999 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.545017 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.545045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.545088 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.611656 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/3.log"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.612637 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/2.log"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.616299 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0" exitCode=1
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.616359 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0"}
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.616413 4922 scope.go:117] "RemoveContainer" containerID="56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0"
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.618005 4922 scope.go:117] "RemoveContainer" containerID="e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0"
Jan 26 14:11:02 crc kubenswrapper[4922]: E0126 14:11:02.618380 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57"
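Above, ovnkube-controller exits with code 1 and the kubelet places it in CrashLoopBackOff with a 40s back-off. The kubelet's default crash back-off starts at 10s and doubles per consecutive failure, capped at five minutes, which is consistent with a third consecutive failure (restartCount 3 appears in the status patches below). A small sketch of that schedule, using the documented defaults rather than anything read from this node's config:

# Sketch of the kubelet's default container restart back-off:
# 10s initial, doubling per consecutive failure, capped at 300s.
INITIAL_S = 10
CAP_S = 300

def backoff_seconds(consecutive_failures: int) -> int:
    # Back-off applied before restart attempt N (1-based).
    return min(INITIAL_S * 2 ** (consecutive_failures - 1), CAP_S)

for n in range(1, 7):
    print(n, backoff_seconds(n))  # 10, 20, 40, 80, 160, 300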
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.647648 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.647735 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.647777 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.647802 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.647820 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.651476 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.672186 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:11:01Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:11:01.365595 6980 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:11:01.365633 6980 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:11:01.365668 6980 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:11:01.365760 6980 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:11:01.365848 6980 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:11:01.366266 6980 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:11:01.366409 6980 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:11:01.366451 6980 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:11:01.366483 6980 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:11:01.366564 6980 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.686989 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.701844 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.718781 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.732573 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.746291 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.754681 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.754734 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.754749 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.754770 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.754783 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.758258 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.771549 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 
14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.795946 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.809482 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.822140 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"2026-01-26T14:10:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616\\\\n2026-01-26T14:10:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616 to /host/opt/cni/bin/\\\\n2026-01-26T14:10:04Z [verbose] multus-daemon started\\\\n2026-01-26T14:10:04Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:10:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.833260 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0accf413-9e1c-4104-830a-6700b94027bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f85a98054e53886511d2b982872884c925f3331ec72172233c1e15f36d2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.844951 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.853496 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.856929 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.856966 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.856977 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.856998 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.857011 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.864631 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.877664 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.894484 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835
bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:02Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.961236 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.961316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.961353 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.961377 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:02 crc kubenswrapper[4922]: I0126 14:11:02.961389 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:02Z","lastTransitionTime":"2026-01-26T14:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.064490 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.064571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.064586 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.064609 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.064646 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.088783 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:26:17.117874418 +0000 UTC Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.092386 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:03 crc kubenswrapper[4922]: E0126 14:11:03.092584 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.122051 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:11:01Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:11:01.365595 6980 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:11:01.365633 6980 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:11:01.365668 6980 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:11:01.365760 6980 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:11:01.365848 6980 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:11:01.366266 6980 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:11:01.366409 6980 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:11:01.366451 6980 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:11:01.366483 6980 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:11:01.366564 6980 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.139520 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.153110 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.168219 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.168254 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.168266 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.168284 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.168295 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.172124 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.184626 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.196371 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.206447 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.220984 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 
14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.242924 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.256791 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.271078 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.271134 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.271148 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.271167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.271180 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.271393 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.283724 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0accf413-9e1c-4104-830a-6700b94027bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f85a98054e53886511d2b982872884c925f3331ec72172233c1e15f36d2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd81
9398951e033daec1b994757e3853f640ee26c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.301925 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.318961 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"2026-01-26T14:10:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616\\\\n2026-01-26T14:10:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616 to /host/opt/cni/bin/\\\\n2026-01-26T14:10:04Z [verbose] multus-daemon started\\\\n2026-01-26T14:10:04Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:10:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.332120 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.348003 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.367823 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.373389 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.373426 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.373442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.373467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.373482 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.412619 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.436154 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.476576 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.476643 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.476668 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.476699 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.476717 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.580499 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.580863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.581105 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.581179 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.581248 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.622619 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/3.log" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.684387 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.684440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.686447 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.686475 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.686489 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.789760 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.789805 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.789816 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.789834 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.789844 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.892980 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.893086 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.893100 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.893120 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.893133 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.996056 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.996155 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.996167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.996191 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:03 crc kubenswrapper[4922]: I0126 14:11:03.996206 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:03Z","lastTransitionTime":"2026-01-26T14:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.089175 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:22:21.881763947 +0000 UTC Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.091585 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.091672 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.091821 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:04 crc kubenswrapper[4922]: E0126 14:11:04.091922 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:04 crc kubenswrapper[4922]: E0126 14:11:04.092123 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:04 crc kubenswrapper[4922]: E0126 14:11:04.092231 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.098887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.098924 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.098934 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.098954 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.098965 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:04Z","lastTransitionTime":"2026-01-26T14:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.202233 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.202623 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.202719 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.202873 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.203022 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:04Z","lastTransitionTime":"2026-01-26T14:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.306726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.306840 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.306862 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.306889 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.306907 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:04Z","lastTransitionTime":"2026-01-26T14:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.409630 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.409698 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.409717 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.409744 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.409762 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:04Z","lastTransitionTime":"2026-01-26T14:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.513250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.513315 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.513333 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.513358 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.513374 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:04Z","lastTransitionTime":"2026-01-26T14:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.616970 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.617022 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.617030 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.617050 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.617093 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:04Z","lastTransitionTime":"2026-01-26T14:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.719951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.719997 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.720010 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.720032 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.720047 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:04Z","lastTransitionTime":"2026-01-26T14:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.823741 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.823791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.823805 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.823843 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.823854 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:04Z","lastTransitionTime":"2026-01-26T14:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.927227 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.927292 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.927303 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.927343 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:04 crc kubenswrapper[4922]: I0126 14:11:04.927358 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:04Z","lastTransitionTime":"2026-01-26T14:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.030184 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.030222 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.030251 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.030272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.030285 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.089649 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:18:25.24403183 +0000 UTC Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.092103 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:05 crc kubenswrapper[4922]: E0126 14:11:05.092347 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.133859 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.133905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.133918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.133961 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.133974 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.237797 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.237863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.237881 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.237908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.237927 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.340474 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.340533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.340546 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.340568 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.340590 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.443293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.443328 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.443340 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.443359 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.443371 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.546140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.546189 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.546203 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.546223 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.546236 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.648179 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.648259 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.648278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.648316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.648339 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.751442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.751493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.751507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.751527 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.751541 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.854021 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.854084 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.854098 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.854117 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.854130 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.957361 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.957921 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.957930 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.957953 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:05 crc kubenswrapper[4922]: I0126 14:11:05.957963 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:05Z","lastTransitionTime":"2026-01-26T14:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.047037 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.047294 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.047449 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.047411321 +0000 UTC m=+147.249674133 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.060779 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.060834 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.060851 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.060881 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.060900 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.090005 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:08:12.562045526 +0000 UTC Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.092298 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.092437 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.092337 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.092711 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.093352 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
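The durationBeforeRetry of 1m4s in the failed mount above matches the kubelet volume manager's exponential backoff, which, as I recall from the kubelet's exponentialbackoff package, starts at 500 ms, doubles on each consecutive failure, and caps at 2m2s; 64 s would then be the eighth consecutive failure of this operation. A sketch of that progression under those assumed constants:

from datetime import timedelta

INITIAL = timedelta(milliseconds=500)  # assumed initialDurationBeforeRetry
CAP = timedelta(minutes=2, seconds=2)  # assumed maxDurationBeforeRetry

delay = INITIAL
for failure in range(1, 11):
    print(f"after failure {failure:2d}: retry in {delay}")
    delay = min(delay * 2, CAP)

The eighth line prints 0:01:04, i.e. the 1m4s seen here, and from the ninth failure on the delay pins at the 2m2s cap.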
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.092860 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.148288 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.148485 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.148560 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.148510798 +0000 UTC m=+147.350773570 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.148681 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.148709 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.148732 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.148729 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.148810 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.148783006 +0000 UTC m=+147.351045808 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.148861 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.148960 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.149013 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
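The TearDown failure above is a different class of problem from the "not registered" cache misses: the kubelet has no registered CSI driver named kubevirt.io.hostpath-provisioner, so even unmounting is blocked until the driver pod re-registers over its plugin socket (normally under /var/lib/kubelet/plugins_registry/). What is currently registered can be checked from the API; a hedged sketch using the official Python client, assuming kubeconfig access to this cluster:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config()
storage = client.StorageV1Api()

# Cluster-scoped CSIDriver objects...
for drv in storage.list_csi_driver().items:
    print("CSIDriver:", drv.metadata.name)

# ...and the per-node registration state for this node ("crc").
csinode = storage.read_csi_node("crc")
for d in csinode.spec.drivers or []:
    print("registered on crc:", d.name)

If kubevirt.io.hostpath-provisioner is absent from the CSINode list, the hostpath-provisioner pod on the node is the place to look.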
No retries permitted until 2026-01-26 14:12:10.148999793 +0000 UTC m=+147.351262605 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.149045 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.149131 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.149152 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:11:06 crc kubenswrapper[4922]: E0126 14:11:06.149238 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.149208379 +0000 UTC m=+147.351471331 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.164907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.164970 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.164991 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.165018 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.165039 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
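All of the remaining mount failures above reduce to four objects the kubelet's cache does not know yet; whether they actually exist in the API is quick to verify, under the same kubeconfig assumption as the previous sketch, with the (namespace, kind, name) triples taken from the errors in this log:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

checks = [
    ("openshift-network-console", "secret", "networking-console-plugin-cert"),
    ("openshift-network-console", "configmap", "networking-console-plugin"),
    ("openshift-network-diagnostics", "configmap", "kube-root-ca.crt"),
    ("openshift-network-diagnostics", "configmap", "openshift-service-ca.crt"),
]

for ns, kind, name in checks:
    try:
        if kind == "secret":
            v1.read_namespaced_secret(name, ns)
        else:
            v1.read_namespaced_config_map(name, ns)
        print(f"exists    {kind} {ns}/{name}")
    except client.exceptions.ApiException as exc:
        print(f"HTTP {exc.status}  {kind} {ns}/{name}")

If these all exist, the "not registered" errors point at the kubelet's informer caches not yet being synced rather than at missing objects, which fits a node that has just restarted with its network still down.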
Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.268991 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.269112 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.269140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.269172 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.269195 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.372782 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.372866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.372881 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.372901 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.372914 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.476317 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.476383 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.476405 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.476428 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.476443 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.580225 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.580298 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.580325 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.580357 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.580378 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.684028 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.684146 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.684168 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.684197 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.684216 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.787177 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.787247 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.787264 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.787292 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.787311 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.890095 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.890131 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.890141 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.890159 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.890169 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.993947 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.994033 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.994055 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.994128 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:06 crc kubenswrapper[4922]: I0126 14:11:06.994151 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:06Z","lastTransitionTime":"2026-01-26T14:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.091040 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:27:36.264059066 +0000 UTC Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.091543 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:07 crc kubenswrapper[4922]: E0126 14:11:07.091789 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.097930 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.098019 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.098054 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.098129 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.098151 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.201562 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.201627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.201645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.201671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.201694 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.304829 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.304890 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.304907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.304934 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.304951 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.408121 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.408190 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.408213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.408245 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.408267 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.431788 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.431834 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.431845 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.431865 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.431880 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: E0126 14:11:07.455247 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.460143 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.460216 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
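The patch failure above is the root complaint of this whole stretch: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, five months before this log's clock, so every node-status patch is rejected at the TLS layer. The dates can be read straight off the endpoint; a hedged sketch using the third-party cryptography package (address and port are from the error above):

import ssl
from cryptography import x509

# get_server_certificate fetches the leaf without chain verification,
# which is what we want for a certificate we already suspect is expired.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())

print("subject:  ", cert.subject.rfc4514_string())
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the log

Run it on the node itself (the webhook listens on loopback); a notAfter earlier than the system clock confirms the x509 error verbatim.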
event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.460232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.460282 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.460300 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: E0126 14:11:07.476771 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.482043 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.482096 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.482107 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.482124 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.482136 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: E0126 14:11:07.499208 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.503752 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.503787 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.503799 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.503821 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.503838 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: E0126 14:11:07.519177 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.524525 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.524576 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.524587 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.524611 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.524622 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: E0126 14:11:07.537227 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:11:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d465894b-675b-4495-9485-a609c23a81b4\\\",\\\"systemUUID\\\":\\\"e5a8e8c1-3ae9-423e-89aa-88a14e24c694\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:07 crc kubenswrapper[4922]: E0126 14:11:07.537355 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.539375 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
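
Every retry above dies at the same point: the status patch is sent through the node.network-node-identity.openshift.io validating webhook, whose serving certificate expired on 2025-08-24, while the node clock reads 2026-01-26. A minimal sketch of how one might confirm the certificate dates from the node itself, assuming python3 with the third-party cryptography package is available; the host and port are copied from the failing webhook URL in the log:

```python
#!/usr/bin/env python3
# Sketch: inspect the serving certificate of the webhook that every status
# patch above is failing against. Host and port come from the log's failing
# URL (https://127.0.0.1:9743/node); run this on the CRC node itself.
# Assumes the third-party "cryptography" package is installed for PEM parsing.
import ssl
from datetime import datetime

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743

# get_server_certificate() does not validate the chain, so it still hands
# back the PEM even though the certificate is long expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.utcnow()  # the node clock reads 2026-01-26 in this log
print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before)
print("not after: ", cert.not_valid_after)
print("expired:   ", now > cert.not_valid_after)
```

Run against this endpoint, "not after" should print 2025-08-24 17:21:41, matching the x509 error text above; letting the cluster renew its internal certificates (or, for CRC, recreating the instance) is the usual way out, though that remedy is an assumption, not something this log shows.
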
event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.539440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.539452 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.539505 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.539522 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.642190 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.642239 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.642250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.642268 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.642281 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.745621 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.745668 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.745683 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.745709 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.745727 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.848619 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.848685 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.848705 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.848733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.848753 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.951489 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.951550 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.951564 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.951587 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:07 crc kubenswrapper[4922]: I0126 14:11:07.951603 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:07Z","lastTransitionTime":"2026-01-26T14:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.055196 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.055297 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.055323 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.055361 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.055383 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
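
These event cycles recur roughly every 100 ms: the kubelet keeps recomputing the same four conditions locally, but because every patch is rejected by the webhook, the API server's copy of the node never changes. A hedged sketch for reading back what the API server still holds, assuming the kubernetes Python client and a reachable kubeconfig; the node name crc comes from the log:

```python
#!/usr/bin/env python3
# Sketch: read back what the API server still holds for the node, since none
# of the patches above ever landed. Assumes the "kubernetes" client package
# and a working kubeconfig; the node name "crc" comes from the log.
from kubernetes import client, config

config.load_kube_config()
for cond in client.CoreV1Api().read_node("crc").status.conditions:
    print(f"{cond.type:16} {cond.status:6} {cond.reason}: {cond.message}")
```
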
Has your network provider started?"} Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.091942 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:22:42.805182754 +0000 UTC Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.092316 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.092357 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.092339 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:08 crc kubenswrapper[4922]: E0126 14:11:08.092543 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:08 crc kubenswrapper[4922]: E0126 14:11:08.092644 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:08 crc kubenswrapper[4922]: E0126 14:11:08.094023 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.159869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.159938 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.159952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.160222 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.160244 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.262646 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.262710 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.262726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.262751 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.262770 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.365466 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.365499 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.365507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.365522 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.365532 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.467673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.467716 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.467725 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.467742 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.467753 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.570876 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.570951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.570966 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.570993 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.571026 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.674791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.674866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.674883 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.674908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.674926 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.777665 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.777729 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.777747 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.777775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.777793 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.881404 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.881474 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.881493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.881520 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.881542 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.984981 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.985102 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.985120 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.985148 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:08 crc kubenswrapper[4922]: I0126 14:11:08.985166 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:08Z","lastTransitionTime":"2026-01-26T14:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.088094 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.088151 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.088165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.088183 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.088194 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:09Z","lastTransitionTime":"2026-01-26T14:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.091992 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt"
Jan 26 14:11:09 crc kubenswrapper[4922]: E0126 14:11:09.092257 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.092332 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:12:22.236144603 +0000 UTC
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.192822 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.192882 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.192895 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.192921 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.192935 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:09Z","lastTransitionTime":"2026-01-26T14:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.295775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.295887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.295898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.295916 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.295932 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:09Z","lastTransitionTime":"2026-01-26T14:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.399845 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.399916 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.399930 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.399954 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.399971 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:09Z","lastTransitionTime":"2026-01-26T14:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.504312 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.504701 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.504713 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.504756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.504771 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:09Z","lastTransitionTime":"2026-01-26T14:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.607973 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.608040 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.608051 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.608103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.608130 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:09Z","lastTransitionTime":"2026-01-26T14:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.711124 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.711203 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.711222 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.711249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.711269 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:09Z","lastTransitionTime":"2026-01-26T14:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.814580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.814635 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.814647 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.814673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.814687 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:09Z","lastTransitionTime":"2026-01-26T14:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.917698 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.917748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.917764 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.917787 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:09 crc kubenswrapper[4922]: I0126 14:11:09.917803 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:09Z","lastTransitionTime":"2026-01-26T14:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.021215 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.021263 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.021276 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.021295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.021309 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.092027 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.092556 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:41:38.074465946 +0000 UTC
Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.092205 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.092146 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 14:11:10 crc kubenswrapper[4922]: E0126 14:11:10.093434 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 14:11:10 crc kubenswrapper[4922]: E0126 14:11:10.093379 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 14:11:10 crc kubenswrapper[4922]: E0126 14:11:10.093532 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.125109 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.125182 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.125199 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.125226 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.125245 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.228538 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.228600 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.228613 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.228639 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.228656 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.331786 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.331836 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.331848 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.331867 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.331884 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.435258 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.435316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.435328 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.435348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.435364 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.538013 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.538057 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.538092 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.538111 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.538125 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.641889 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.641943 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.641954 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.641984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.641996 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.745140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.745237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.745261 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.745290 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.745314 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.849350 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.849440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.849468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.849502 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.849528 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.953973 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.954057 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.954110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.954144 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:10 crc kubenswrapper[4922]: I0126 14:11:10.954163 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:10Z","lastTransitionTime":"2026-01-26T14:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.057435 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.057474 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.057483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.057498 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.057509 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.092374 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:11 crc kubenswrapper[4922]: E0126 14:11:11.092676 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.092700 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:38:04.524382843 +0000 UTC Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.161768 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.161850 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.161874 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.161975 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.162135 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.264865 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.264946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.264960 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.264983 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.265023 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.367777 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.367827 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.367839 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.367858 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.367875 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.471481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.471565 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.471580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.471607 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.471621 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.574520 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.574564 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.574593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.574608 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.574618 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.676898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.676940 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.676951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.676966 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.676978 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.779488 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.779567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.779575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.779591 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.779602 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.882482 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.882540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.882553 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.882574 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.882586 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.985943 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.985984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.985995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.986010 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:11 crc kubenswrapper[4922]: I0126 14:11:11.986021 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:11Z","lastTransitionTime":"2026-01-26T14:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.088791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.088851 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.088871 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.088901 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.088921 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:12Z","lastTransitionTime":"2026-01-26T14:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.092143 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.092224 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:12 crc kubenswrapper[4922]: E0126 14:11:12.092281 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.092353 4922 util.go:30] "No sandbox for pod can be found. 
Jan 26 14:11:12 crc kubenswrapper[4922]: E0126 14:11:12.092384 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 14:11:12 crc kubenswrapper[4922]: E0126 14:11:12.092583 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.092907 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:56:14.620816307 +0000 UTC
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.192208 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.192275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.192297 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.192327 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.192349 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:12Z","lastTransitionTime":"2026-01-26T14:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.295831 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.295881 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.295898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.295924 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.295944 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:12Z","lastTransitionTime":"2026-01-26T14:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.398780 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.398878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.398895 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.398922 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.398946 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:12Z","lastTransitionTime":"2026-01-26T14:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.502190 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.502245 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.502258 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.502275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.502287 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:12Z","lastTransitionTime":"2026-01-26T14:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.605336 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.605434 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.605451 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.605477 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.605498 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:12Z","lastTransitionTime":"2026-01-26T14:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.708333 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.708388 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.708399 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.708422 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.708433 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:12Z","lastTransitionTime":"2026-01-26T14:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.811936 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.812015 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.812029 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.812055 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.812168 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:12Z","lastTransitionTime":"2026-01-26T14:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.915356 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.915433 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.915452 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.915479 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:12 crc kubenswrapper[4922]: I0126 14:11:12.915502 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:12Z","lastTransitionTime":"2026-01-26T14:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.019183 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.019263 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.019287 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.019317 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.019339 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.091740 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt"
Jan 26 14:11:13 crc kubenswrapper[4922]: E0126 14:11:13.092016 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.093724 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:27:56.897695846 +0000 UTC Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.112764 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.122672 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.122778 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.122794 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.122815 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.122828 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.135919 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.153432 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.168637 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.190947 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.212298 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.224543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.224583 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.224595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.224617 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.224632 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.236756 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.258346 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f795766200f23a07fd4ef463b5d19333c7af8e6931798e31a087cc3dc6bcc0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:32Z\\\",\\\"message\\\":\\\"17ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:10:32.079791 6579 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 14:10:32.079712 6579 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:11:01Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:11:01.365595 6980 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:11:01.365633 6980 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:11:01.365668 6980 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:11:01.365760 6980 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:11:01.365848 6980 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:11:01.366266 6980 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:11:01.366409 6980 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:11:01.366451 6980 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:11:01.366483 6980 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:11:01.366564 6980 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.280842 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.299448 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.318629 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.327488 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.327537 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.327549 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.327570 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.327590 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.335112 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.355717 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.372167 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.387473 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.417638 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.430717 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.430786 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.430799 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.430822 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.430834 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.439018 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.462267 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"2026-01-26T14:10:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616\\\\n2026-01-26T14:10:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616 to /host/opt/cni/bin/\\\\n2026-01-26T14:10:04Z [verbose] multus-daemon started\\\\n2026-01-26T14:10:04Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:10:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.475703 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0accf413-9e1c-4104-830a-6700b94027bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f85a98054e53886511d2b982872884c925f3331ec72172233c1e15f36d2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.533865 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.533923 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.533939 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.533965 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.533977 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.637116 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.637363 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.637396 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.637424 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.637465 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.740511 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.740574 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.740597 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.740629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.740652 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.844092 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.844173 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.844194 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.844219 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.844237 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.948963 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.949052 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.949115 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.949150 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:13 crc kubenswrapper[4922]: I0126 14:11:13.949244 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:13Z","lastTransitionTime":"2026-01-26T14:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.053324 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.053381 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.053392 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.053413 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.053425 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.092016 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.092043 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:14 crc kubenswrapper[4922]: E0126 14:11:14.092225 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.092035 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:14 crc kubenswrapper[4922]: E0126 14:11:14.092343 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:14 crc kubenswrapper[4922]: E0126 14:11:14.092499 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.094152 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:02:37.569365791 +0000 UTC Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.156367 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.156448 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.156460 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.156483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.156497 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.259028 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.259130 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.259150 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.259177 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.259197 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.361562 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.361625 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.361635 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.361648 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.361658 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.464906 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.465004 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.465019 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.465039 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.465054 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.568505 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.568585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.568605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.568634 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.568656 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.671837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.671897 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.671914 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.671933 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.671964 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.774961 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.775030 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.775048 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.775109 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.775138 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.878297 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.878371 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.878389 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.878756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.878972 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.982250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.982331 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.982345 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.982367 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:14 crc kubenswrapper[4922]: I0126 14:11:14.982380 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:14Z","lastTransitionTime":"2026-01-26T14:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.084797 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.084873 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.084890 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.084918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.084932 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:15Z","lastTransitionTime":"2026-01-26T14:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.092383 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:15 crc kubenswrapper[4922]: E0126 14:11:15.092568 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.093238 4922 scope.go:117] "RemoveContainer" containerID="e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0" Jan 26 14:11:15 crc kubenswrapper[4922]: E0126 14:11:15.093614 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.095251 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:56:04.016847067 +0000 UTC Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.111007 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8w5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a511a19d-84dc-4136-84e9-2060471c1fa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849d4ba5335f2b11d91361ec69242a0858f45a86cb51be8e57e9d57af7adc2ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-m92xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8w5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.131690 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb3fd63f-eedf-4790-88f6-325e446b37c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffb5b2bf2d6b3501905c70aec93b706021e194eff95c2b308a43e2c8a3a068e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aadcd5274a844f9376b357120e508c665b26c3b103c5b259e37cf0529460f560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\
\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thvb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfbd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.159245 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1232c3-80be-4ded-ac72-3e5ac1ffa00d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e4b097428bf223d7b43b6f558824e2558a4e9e86a702e6da44c3ea0ac7ecdc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ee9b51e37e91e1362237a40568e4502fdf97c7ad3328742283992e6a0000b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0214ef40cf91aafec9a3c3a577f099fd534fdd31d8edb66ff5f29b0eed1cd31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4633b95781b46ef6f87b24ecaac66262bf743067f4260fc03c17aff24a84458d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6707d8f0ae1a6c8790eadab27e2cbb1941badb2f930abdb946b10637a91ba540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e07ad130eeeeea9211208ff92b87b991264f69a27c5e110fcea845a37d5ee542\\\",\\\"exi
tCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6674aea9df7c90f91e8813917f192746b2e6158e142ffd1669c1252ffc726ef8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b9577273d4d13de06542d2d3ee860ee085b72ce3aef80a2b652e9ea8f006c7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.178696 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1a249a-076c-4808-97f9-12ecbaa07163\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4067a5bc337beb5eb6dec1ca1a9af375691f89a27948e9068620e5b894a898cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff18555cc856f4feb1a392e127e47390ccd66584988056ad0b0541bc0976d903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7e10177f56af00e63c29a3e848de844a5d540632f8b162835189a8bde64a87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.187669 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.187718 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.187734 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.187756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.187770 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:15Z","lastTransitionTime":"2026-01-26T14:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.195719 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d937365-993a-4263-bcbe-3fe486b4352d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d189b8ef703bd80ea89da4f678dc03dc5529f6e5e040297943d483077b4926a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0be8b96b89f93aaf1a0f15e98e2d94f540c1d601663191d47867674d6f245b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56a3863fa92d532aa7396b8a2fe367db6a9759330f6fef3e07d90a9558bd9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b639fb07015de13c789c10158ed92ff33cbf899acf11ddc174dfe9681e185a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.208053 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.222839 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.235576 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0accf413-9e1c-4104-830a-6700b94027bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f8f85a98054e53886511d2b982872884c925f3331ec72172233c1e15f36d2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e3853f640ee26c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382ca1bdc9183c2d4ed01dd819398951e033daec1b994757e
3853f640ee26c40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.252650 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f20126294491782022bca578609b920621a40eb534d77b6a83633d4021c4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded3095b662bcf9e7ae3269451f8e369a77a3990bd6355c715ec309b1dd60c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.272316 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zx7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"103e8f62-57c7-4d49-b740-16d357710e61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:10:49Z\\\",\\\"message\\\":\\\"2026-01-26T14:10:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616\\\\n2026-01-26T14:10:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ec41588-c8a2-4d5f-bc5d-a8bb8d800616 to /host/opt/cni/bin/\\\\n2026-01-26T14:10:04Z [verbose] multus-daemon started\\\\n2026-01-26T14:10:04Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:10:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppvjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zx7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.291094 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.291152 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.291165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.291187 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.291202 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:15Z","lastTransitionTime":"2026-01-26T14:11:15Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.297970 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30ef84c6-ac27-443b-a9a7-37596edecde6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
26T14:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 14:09:56.627926 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 14:09:56.630412 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1712713603/tls.crt::/tmp/serving-cert-1712713603/tls.key\\\\\\\"\\\\nI0126 14:10:02.167380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 14:10:02.180566 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 14:10:02.180603 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 14:10:02.180867 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 14:10:02.180877 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 14:10:02.214839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 14:10:02.214879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 14:10:02.214892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 14:10:02.214897 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 14:10:02.214908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 14:10:02.214912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 14:10:02.215317 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 14:10:02.220829 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:09:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.319021 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190a433b489aaf4b8fa119921a9ebac1ce18e8156f73464198dc575810f11d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.335907 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tr7ks" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8907acd9-6134-47b2-b97c-dd03dea18383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://086a87807c6d54a89b58524006d1cd7423a3b99b59081767c4771a788ff15287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xbrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tr7ks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.349919 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d729a48f-6c8a-41a2-82f0-336269ebbfc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://064db40d1548d6e56fb9efbd81ae3c2399dd12e45182cd92cd4a0e341fde93fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk4dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g5x8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.365797 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"756187f6-68ea-4408-8d07-f691e16b4484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z87h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pzxnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.392218 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c98c97b0f83e3883d1f949ec3d72e7c25828309c333ab298cf68c583ac9ac71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.394465 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.394565 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.394582 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.394602 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.394615 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:15Z","lastTransitionTime":"2026-01-26T14:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.409807 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.426384 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-52ctw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1c927f4-1d72-49fa-b6fd-9390de6d00d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da24353ea5c76213f58d4849a9dcecd56d145957cfc24204bf4f1186a2f054c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b71facba99312da4aceb0e7bff75fab676df49df86757af92c7d6c2105284c5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39fa1cb48b915d0be229b30bfd3871e30b246c862f612c990a058ab7f210781\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3ea0631fc7a8126b31d5fc8f0332abd19783299dc7442e5ea71a5df1cbb6425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd989455839b2bfbeaeb35200c84b154044836609ff6e384a0cf0326e37c88cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://336cbe346a5921078a86006376ed964053d12bfdb30ca559f283035e23ddf249\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b1086a945102d20b5dffa936ae0e30d29a197f3123556675489cb113570a6b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxf58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-52ctw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.452615 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec4defeb-f2b0-4291-9147-b37e5c43da57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:11:01Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:11:01.365595 6980 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:11:01.365633 6980 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:11:01.365668 6980 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:11:01.365760 6980 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:11:01.365848 6980 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:11:01.366266 6980 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:11:01.366409 6980 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:11:01.366451 6980 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:11:01.366483 6980 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:11:01.366564 6980 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:11:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m7cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5m7p9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:11:15Z is after 2025-08-24T17:21:41Z" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.498229 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.498278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.498291 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.498312 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.498325 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:15Z","lastTransitionTime":"2026-01-26T14:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.601541 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.601602 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.601617 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.601638 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.601656 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:15Z","lastTransitionTime":"2026-01-26T14:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.705034 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.705159 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.705185 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.705221 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.705243 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:15Z","lastTransitionTime":"2026-01-26T14:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.809690 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.809782 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.809806 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.809838 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.809868 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:15Z","lastTransitionTime":"2026-01-26T14:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.912675 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.912736 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.912750 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.912774 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:15 crc kubenswrapper[4922]: I0126 14:11:15.912789 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:15Z","lastTransitionTime":"2026-01-26T14:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.016793 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.016863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.016878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.016904 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.016918 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.091887 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.091948 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.092157 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:16 crc kubenswrapper[4922]: E0126 14:11:16.092238 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:16 crc kubenswrapper[4922]: E0126 14:11:16.092406 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:16 crc kubenswrapper[4922]: E0126 14:11:16.092525 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.096015 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:42:03.156731483 +0000 UTC Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.119901 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.119970 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.119984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.120008 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.120025 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.224311 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.224372 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.224383 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.224402 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.224423 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.327939 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.328007 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.328025 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.328052 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.328114 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.431919 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.431965 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.431975 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.431993 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.432004 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.535180 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.535270 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.535294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.535331 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.535354 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.642978 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.643043 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.643061 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.643119 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.643138 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.746169 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.746249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.746267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.746293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.746311 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.849106 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.849160 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.849177 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.849194 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.849204 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.952809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.952889 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.952907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.952932 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:16 crc kubenswrapper[4922]: I0126 14:11:16.952951 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:16Z","lastTransitionTime":"2026-01-26T14:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.055931 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.056011 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.056025 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.056047 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.056099 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:17Z","lastTransitionTime":"2026-01-26T14:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.091644 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:17 crc kubenswrapper[4922]: E0126 14:11:17.091939 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.096260 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:16:28.695174429 +0000 UTC Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.159764 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.159851 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.159869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.159893 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.159912 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:17Z","lastTransitionTime":"2026-01-26T14:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.263491 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.263590 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.263610 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.263636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.263656 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:17Z","lastTransitionTime":"2026-01-26T14:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.367283 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.367347 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.367364 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.367392 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.367413 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:17Z","lastTransitionTime":"2026-01-26T14:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.471230 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.471315 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.471339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.471372 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.471394 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:17Z","lastTransitionTime":"2026-01-26T14:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.574153 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.574217 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.574228 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.574245 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.574257 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:17Z","lastTransitionTime":"2026-01-26T14:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.677797 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.677886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.677903 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.677926 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.677943 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:17Z","lastTransitionTime":"2026-01-26T14:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.780468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.780523 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.780537 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.780557 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.780569 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:17Z","lastTransitionTime":"2026-01-26T14:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.808981 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.809052 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.809088 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.809112 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.809126 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:11:17Z","lastTransitionTime":"2026-01-26T14:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.883351 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p"] Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.884090 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.888764 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.888931 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.889175 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.894801 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.955491 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9zx7f" podStartSLOduration=75.955470033 podStartE2EDuration="1m15.955470033s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:17.954997209 +0000 UTC m=+95.157259981" watchObservedRunningTime="2026-01-26 14:11:17.955470033 +0000 UTC m=+95.157732806" Jan 26 14:11:17 crc kubenswrapper[4922]: I0126 14:11:17.982633 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.98261116 podStartE2EDuration="23.98261116s" podCreationTimestamp="2026-01-26 14:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:17.96638651 +0000 UTC m=+95.168649272" watchObservedRunningTime="2026-01-26 14:11:17.98261116 +0000 UTC m=+95.184873942" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.011273 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podStartSLOduration=76.011253053 podStartE2EDuration="1m16.011253053s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:18.010605683 +0000 UTC m=+95.212868455" watchObservedRunningTime="2026-01-26 14:11:18.011253053 +0000 UTC m=+95.213515825" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.011680 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tr7ks" podStartSLOduration=77.011674426 podStartE2EDuration="1m17.011674426s" podCreationTimestamp="2026-01-26 14:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:17.994841587 +0000 UTC m=+95.197104359" watchObservedRunningTime="2026-01-26 14:11:18.011674426 +0000 UTC m=+95.213937198" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 
14:11:18.012235 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb9eee0-07ae-44ac-a0ca-449131f4530c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.012260 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb9eee0-07ae-44ac-a0ca-449131f4530c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.012338 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bbb9eee0-07ae-44ac-a0ca-449131f4530c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.012440 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bbb9eee0-07ae-44ac-a0ca-449131f4530c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.012465 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbb9eee0-07ae-44ac-a0ca-449131f4530c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.052619 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=75.052595157 podStartE2EDuration="1m15.052595157s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:18.050907995 +0000 UTC m=+95.253170777" watchObservedRunningTime="2026-01-26 14:11:18.052595157 +0000 UTC m=+95.254857929" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.091753 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:18 crc kubenswrapper[4922]: E0126 14:11:18.091893 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.091974 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:18 crc kubenswrapper[4922]: E0126 14:11:18.092032 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.092100 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:18 crc kubenswrapper[4922]: E0126 14:11:18.092155 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.096442 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:51:38.803446007 +0000 UTC Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.096488 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.106930 4922 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.113535 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb9eee0-07ae-44ac-a0ca-449131f4530c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.113574 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbb9eee0-07ae-44ac-a0ca-449131f4530c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.113647 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bbb9eee0-07ae-44ac-a0ca-449131f4530c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.115024 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/bbb9eee0-07ae-44ac-a0ca-449131f4530c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.120159 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbb9eee0-07ae-44ac-a0ca-449131f4530c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.120195 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bbb9eee0-07ae-44ac-a0ca-449131f4530c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.120355 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bbb9eee0-07ae-44ac-a0ca-449131f4530c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.120579 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bbb9eee0-07ae-44ac-a0ca-449131f4530c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.132370 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-52ctw" podStartSLOduration=75.132350925 podStartE2EDuration="1m15.132350925s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:18.090836496 +0000 UTC m=+95.293099278" watchObservedRunningTime="2026-01-26 14:11:18.132350925 +0000 UTC m=+95.334613687" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.136385 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb9eee0-07ae-44ac-a0ca-449131f4530c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.150085 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbb9eee0-07ae-44ac-a0ca-449131f4530c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dz56p\" (UID: \"bbb9eee0-07ae-44ac-a0ca-449131f4530c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.184658 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.184630506 podStartE2EDuration="1m16.184630506s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:18.169625674 +0000 UTC m=+95.371888466" watchObservedRunningTime="2026-01-26 14:11:18.184630506 +0000 UTC m=+95.386893278" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.201194 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.201170606 podStartE2EDuration="48.201170606s" podCreationTimestamp="2026-01-26 14:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:18.186731351 +0000 UTC m=+95.388994143" watchObservedRunningTime="2026-01-26 14:11:18.201170606 +0000 UTC m=+95.403433388" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.219132 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.227377 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8w5kn" podStartSLOduration=76.227362803 podStartE2EDuration="1m16.227362803s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:18.226175407 +0000 UTC m=+95.428438179" watchObservedRunningTime="2026-01-26 14:11:18.227362803 +0000 UTC m=+95.429625575" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.243390 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfbd7" podStartSLOduration=75.243357066 podStartE2EDuration="1m15.243357066s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:18.241657373 +0000 UTC m=+95.443920145" watchObservedRunningTime="2026-01-26 14:11:18.243357066 +0000 UTC m=+95.445619838" Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.687472 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" event={"ID":"bbb9eee0-07ae-44ac-a0ca-449131f4530c","Type":"ContainerStarted","Data":"e31526c4f6eb2ea04de6ee61bb50d65b909c1b14ebe0e8f7b601fc30a7e3bfc9"} Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.687538 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" event={"ID":"bbb9eee0-07ae-44ac-a0ca-449131f4530c","Type":"ContainerStarted","Data":"3ba253585a00a8de6c161bc7741e29c8b86efa60846d262ca9d7200004fe9244"} Jan 26 14:11:18 crc kubenswrapper[4922]: I0126 14:11:18.711302 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.711277447 podStartE2EDuration="1m11.711277447s" podCreationTimestamp="2026-01-26 14:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:18.272421691 
+0000 UTC m=+95.474684463" watchObservedRunningTime="2026-01-26 14:11:18.711277447 +0000 UTC m=+95.913540219" Jan 26 14:11:19 crc kubenswrapper[4922]: I0126 14:11:19.092469 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:19 crc kubenswrapper[4922]: E0126 14:11:19.092669 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:20 crc kubenswrapper[4922]: I0126 14:11:20.091821 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:20 crc kubenswrapper[4922]: E0126 14:11:20.092141 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:20 crc kubenswrapper[4922]: I0126 14:11:20.092606 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:20 crc kubenswrapper[4922]: E0126 14:11:20.092735 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:20 crc kubenswrapper[4922]: I0126 14:11:20.093094 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:20 crc kubenswrapper[4922]: E0126 14:11:20.093220 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:21 crc kubenswrapper[4922]: I0126 14:11:21.091702 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:21 crc kubenswrapper[4922]: E0126 14:11:21.091895 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:21 crc kubenswrapper[4922]: I0126 14:11:21.261715 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:21 crc kubenswrapper[4922]: E0126 14:11:21.262013 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:11:21 crc kubenswrapper[4922]: E0126 14:11:21.262170 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs podName:756187f6-68ea-4408-8d07-f691e16b4484 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:25.262142341 +0000 UTC m=+162.464405123 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs") pod "network-metrics-daemon-pzxnt" (UID: "756187f6-68ea-4408-8d07-f691e16b4484") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:11:22 crc kubenswrapper[4922]: I0126 14:11:22.091491 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:22 crc kubenswrapper[4922]: I0126 14:11:22.091563 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:22 crc kubenswrapper[4922]: I0126 14:11:22.091531 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:22 crc kubenswrapper[4922]: E0126 14:11:22.091725 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:22 crc kubenswrapper[4922]: E0126 14:11:22.091901 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:22 crc kubenswrapper[4922]: E0126 14:11:22.092024 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:23 crc kubenswrapper[4922]: I0126 14:11:23.091636 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:23 crc kubenswrapper[4922]: E0126 14:11:23.094027 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:24 crc kubenswrapper[4922]: I0126 14:11:24.091726 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:24 crc kubenswrapper[4922]: I0126 14:11:24.091870 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:24 crc kubenswrapper[4922]: I0126 14:11:24.091871 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:24 crc kubenswrapper[4922]: E0126 14:11:24.092547 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:24 crc kubenswrapper[4922]: E0126 14:11:24.092658 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:24 crc kubenswrapper[4922]: E0126 14:11:24.092744 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:25 crc kubenswrapper[4922]: I0126 14:11:25.091956 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:25 crc kubenswrapper[4922]: E0126 14:11:25.092233 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:26 crc kubenswrapper[4922]: I0126 14:11:26.092331 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:26 crc kubenswrapper[4922]: E0126 14:11:26.092461 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:26 crc kubenswrapper[4922]: I0126 14:11:26.092332 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:26 crc kubenswrapper[4922]: I0126 14:11:26.092332 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:26 crc kubenswrapper[4922]: E0126 14:11:26.092556 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:26 crc kubenswrapper[4922]: I0126 14:11:26.092636 4922 scope.go:117] "RemoveContainer" containerID="e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0" Jan 26 14:11:26 crc kubenswrapper[4922]: E0126 14:11:26.092791 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:26 crc kubenswrapper[4922]: E0126 14:11:26.092809 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" Jan 26 14:11:27 crc kubenswrapper[4922]: I0126 14:11:27.091936 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:27 crc kubenswrapper[4922]: E0126 14:11:27.092276 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:28 crc kubenswrapper[4922]: I0126 14:11:28.092202 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:28 crc kubenswrapper[4922]: I0126 14:11:28.092265 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:28 crc kubenswrapper[4922]: E0126 14:11:28.092572 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:28 crc kubenswrapper[4922]: E0126 14:11:28.092757 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:28 crc kubenswrapper[4922]: I0126 14:11:28.093429 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:28 crc kubenswrapper[4922]: E0126 14:11:28.093655 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:29 crc kubenswrapper[4922]: I0126 14:11:29.091734 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:29 crc kubenswrapper[4922]: E0126 14:11:29.091998 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:30 crc kubenswrapper[4922]: I0126 14:11:30.091639 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:30 crc kubenswrapper[4922]: I0126 14:11:30.091727 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:30 crc kubenswrapper[4922]: I0126 14:11:30.091792 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:30 crc kubenswrapper[4922]: E0126 14:11:30.092448 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:30 crc kubenswrapper[4922]: E0126 14:11:30.092637 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:30 crc kubenswrapper[4922]: E0126 14:11:30.092810 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:31 crc kubenswrapper[4922]: I0126 14:11:31.091861 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:31 crc kubenswrapper[4922]: E0126 14:11:31.092103 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:32 crc kubenswrapper[4922]: I0126 14:11:32.092317 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:32 crc kubenswrapper[4922]: I0126 14:11:32.092402 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:32 crc kubenswrapper[4922]: I0126 14:11:32.092754 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:32 crc kubenswrapper[4922]: E0126 14:11:32.092970 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:32 crc kubenswrapper[4922]: E0126 14:11:32.093148 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:32 crc kubenswrapper[4922]: E0126 14:11:32.093245 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:33 crc kubenswrapper[4922]: I0126 14:11:33.092579 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:33 crc kubenswrapper[4922]: E0126 14:11:33.096504 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:34 crc kubenswrapper[4922]: I0126 14:11:34.091805 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:34 crc kubenswrapper[4922]: I0126 14:11:34.091849 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:34 crc kubenswrapper[4922]: I0126 14:11:34.091866 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:34 crc kubenswrapper[4922]: E0126 14:11:34.091997 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:34 crc kubenswrapper[4922]: E0126 14:11:34.092105 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:34 crc kubenswrapper[4922]: E0126 14:11:34.092259 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:35 crc kubenswrapper[4922]: I0126 14:11:35.092046 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:35 crc kubenswrapper[4922]: E0126 14:11:35.092355 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:35 crc kubenswrapper[4922]: I0126 14:11:35.757487 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/1.log" Jan 26 14:11:35 crc kubenswrapper[4922]: I0126 14:11:35.758361 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/0.log" Jan 26 14:11:35 crc kubenswrapper[4922]: I0126 14:11:35.758436 4922 generic.go:334] "Generic (PLEG): container finished" podID="103e8f62-57c7-4d49-b740-16d357710e61" containerID="092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172" exitCode=1 Jan 26 14:11:35 crc kubenswrapper[4922]: I0126 14:11:35.758484 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zx7f" event={"ID":"103e8f62-57c7-4d49-b740-16d357710e61","Type":"ContainerDied","Data":"092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172"} Jan 26 14:11:35 crc kubenswrapper[4922]: I0126 14:11:35.758538 4922 scope.go:117] "RemoveContainer" containerID="92da2e8b33e9cbd347226755783ec8d59a4132aeb61dae003138956f86051197" Jan 26 14:11:35 crc kubenswrapper[4922]: I0126 14:11:35.759303 4922 scope.go:117] "RemoveContainer" containerID="092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172" Jan 26 14:11:35 crc kubenswrapper[4922]: E0126 14:11:35.759647 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9zx7f_openshift-multus(103e8f62-57c7-4d49-b740-16d357710e61)\"" pod="openshift-multus/multus-9zx7f" podUID="103e8f62-57c7-4d49-b740-16d357710e61" Jan 26 14:11:35 crc kubenswrapper[4922]: I0126 14:11:35.784703 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dz56p" podStartSLOduration=92.784673476 podStartE2EDuration="1m32.784673476s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:18.712428022 +0000 UTC m=+95.914690804" watchObservedRunningTime="2026-01-26 
14:11:35.784673476 +0000 UTC m=+112.986936288" Jan 26 14:11:36 crc kubenswrapper[4922]: I0126 14:11:36.092435 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:36 crc kubenswrapper[4922]: I0126 14:11:36.092435 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:36 crc kubenswrapper[4922]: I0126 14:11:36.093399 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:36 crc kubenswrapper[4922]: E0126 14:11:36.093507 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:36 crc kubenswrapper[4922]: E0126 14:11:36.093620 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:36 crc kubenswrapper[4922]: E0126 14:11:36.093747 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:36 crc kubenswrapper[4922]: I0126 14:11:36.763017 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/1.log" Jan 26 14:11:37 crc kubenswrapper[4922]: I0126 14:11:37.091924 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:37 crc kubenswrapper[4922]: E0126 14:11:37.092195 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:38 crc kubenswrapper[4922]: I0126 14:11:38.092322 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:38 crc kubenswrapper[4922]: I0126 14:11:38.092351 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:38 crc kubenswrapper[4922]: I0126 14:11:38.092407 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:38 crc kubenswrapper[4922]: E0126 14:11:38.093303 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:38 crc kubenswrapper[4922]: E0126 14:11:38.093027 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:38 crc kubenswrapper[4922]: E0126 14:11:38.093467 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:38 crc kubenswrapper[4922]: I0126 14:11:38.093827 4922 scope.go:117] "RemoveContainer" containerID="e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0" Jan 26 14:11:38 crc kubenswrapper[4922]: E0126 14:11:38.094191 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5m7p9_openshift-ovn-kubernetes(ec4defeb-f2b0-4291-9147-b37e5c43da57)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" Jan 26 14:11:39 crc kubenswrapper[4922]: I0126 14:11:39.091964 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:39 crc kubenswrapper[4922]: E0126 14:11:39.092193 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:40 crc kubenswrapper[4922]: I0126 14:11:40.092037 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:40 crc kubenswrapper[4922]: I0126 14:11:40.092054 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:40 crc kubenswrapper[4922]: E0126 14:11:40.092247 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:40 crc kubenswrapper[4922]: E0126 14:11:40.092520 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:40 crc kubenswrapper[4922]: I0126 14:11:40.093397 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:40 crc kubenswrapper[4922]: E0126 14:11:40.093748 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:41 crc kubenswrapper[4922]: I0126 14:11:41.092752 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:41 crc kubenswrapper[4922]: E0126 14:11:41.092999 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:42 crc kubenswrapper[4922]: I0126 14:11:42.091845 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:42 crc kubenswrapper[4922]: E0126 14:11:42.092103 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:42 crc kubenswrapper[4922]: I0126 14:11:42.092379 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:42 crc kubenswrapper[4922]: I0126 14:11:42.092437 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:42 crc kubenswrapper[4922]: E0126 14:11:42.092575 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:42 crc kubenswrapper[4922]: E0126 14:11:42.092734 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:43 crc kubenswrapper[4922]: I0126 14:11:43.091593 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:43 crc kubenswrapper[4922]: E0126 14:11:43.093566 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:43 crc kubenswrapper[4922]: E0126 14:11:43.103917 4922 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 14:11:43 crc kubenswrapper[4922]: E0126 14:11:43.225405 4922 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 14:11:44 crc kubenswrapper[4922]: I0126 14:11:44.092450 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:44 crc kubenswrapper[4922]: I0126 14:11:44.092586 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:44 crc kubenswrapper[4922]: I0126 14:11:44.092609 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:44 crc kubenswrapper[4922]: E0126 14:11:44.092707 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:44 crc kubenswrapper[4922]: E0126 14:11:44.092948 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:44 crc kubenswrapper[4922]: E0126 14:11:44.093327 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:45 crc kubenswrapper[4922]: I0126 14:11:45.092356 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:45 crc kubenswrapper[4922]: E0126 14:11:45.092548 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:46 crc kubenswrapper[4922]: I0126 14:11:46.092525 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:46 crc kubenswrapper[4922]: I0126 14:11:46.092558 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:46 crc kubenswrapper[4922]: E0126 14:11:46.092764 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:46 crc kubenswrapper[4922]: E0126 14:11:46.092845 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:46 crc kubenswrapper[4922]: I0126 14:11:46.093312 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:46 crc kubenswrapper[4922]: E0126 14:11:46.093668 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:47 crc kubenswrapper[4922]: I0126 14:11:47.091520 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:47 crc kubenswrapper[4922]: E0126 14:11:47.091668 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:48 crc kubenswrapper[4922]: I0126 14:11:48.092307 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:48 crc kubenswrapper[4922]: I0126 14:11:48.092396 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:48 crc kubenswrapper[4922]: E0126 14:11:48.092538 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:48 crc kubenswrapper[4922]: I0126 14:11:48.092572 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:48 crc kubenswrapper[4922]: E0126 14:11:48.092674 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:48 crc kubenswrapper[4922]: E0126 14:11:48.092750 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:48 crc kubenswrapper[4922]: E0126 14:11:48.226763 4922 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 14:11:49 crc kubenswrapper[4922]: I0126 14:11:49.091716 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:49 crc kubenswrapper[4922]: E0126 14:11:49.092310 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.092240 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:50 crc kubenswrapper[4922]: E0126 14:11:50.092807 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.093201 4922 scope.go:117] "RemoveContainer" containerID="e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0" Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.092315 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.092269 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:50 crc kubenswrapper[4922]: E0126 14:11:50.093472 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:50 crc kubenswrapper[4922]: E0126 14:11:50.093650 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.818225 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/3.log" Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.821459 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerStarted","Data":"8fb06ac2e79a533c1628fc31291df9c8a1c2ac28c39bc347082a4d3fa718ba74"} Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.822326 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.853509 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podStartSLOduration=107.853487018 podStartE2EDuration="1m47.853487018s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:11:50.853128796 +0000 UTC m=+128.055391598" watchObservedRunningTime="2026-01-26 14:11:50.853487018 +0000 UTC m=+128.055749790" Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.942246 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pzxnt"] Jan 26 14:11:50 crc kubenswrapper[4922]: I0126 14:11:50.942443 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:50 crc kubenswrapper[4922]: E0126 14:11:50.942598 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:51 crc kubenswrapper[4922]: I0126 14:11:51.092646 4922 scope.go:117] "RemoveContainer" containerID="092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172" Jan 26 14:11:51 crc kubenswrapper[4922]: I0126 14:11:51.828394 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/1.log" Jan 26 14:11:51 crc kubenswrapper[4922]: I0126 14:11:51.828501 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zx7f" event={"ID":"103e8f62-57c7-4d49-b740-16d357710e61","Type":"ContainerStarted","Data":"8af1882e572fae107a17e68afc2597eb00a381ab59c787a07cad8c9e8356abeb"} Jan 26 14:11:52 crc kubenswrapper[4922]: I0126 14:11:52.092295 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:52 crc kubenswrapper[4922]: I0126 14:11:52.092304 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:52 crc kubenswrapper[4922]: I0126 14:11:52.092476 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:52 crc kubenswrapper[4922]: E0126 14:11:52.092515 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:11:52 crc kubenswrapper[4922]: E0126 14:11:52.092589 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:11:52 crc kubenswrapper[4922]: E0126 14:11:52.092765 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:11:53 crc kubenswrapper[4922]: I0126 14:11:53.091701 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:53 crc kubenswrapper[4922]: E0126 14:11:53.092825 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pzxnt" podUID="756187f6-68ea-4408-8d07-f691e16b4484" Jan 26 14:11:54 crc kubenswrapper[4922]: I0126 14:11:54.091903 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:11:54 crc kubenswrapper[4922]: I0126 14:11:54.092023 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:11:54 crc kubenswrapper[4922]: I0126 14:11:54.092315 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:11:54 crc kubenswrapper[4922]: I0126 14:11:54.094681 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 14:11:54 crc kubenswrapper[4922]: I0126 14:11:54.094972 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 14:11:54 crc kubenswrapper[4922]: I0126 14:11:54.095848 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 14:11:54 crc kubenswrapper[4922]: I0126 14:11:54.096087 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 14:11:55 crc kubenswrapper[4922]: I0126 14:11:55.092270 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt" Jan 26 14:11:55 crc kubenswrapper[4922]: I0126 14:11:55.096629 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 14:11:55 crc kubenswrapper[4922]: I0126 14:11:55.097467 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.446516 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.493774 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.503055 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.508817 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dst2r"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.509836 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.511985 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.512213 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.512366 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.512428 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.512557 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.512867 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.517784 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.517783 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.518961 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.520114 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.520149 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.533381 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.536752 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9tg4w"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.537808 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-p2r7g"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.538254 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.539002 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.539267 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.539330 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.547093 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r8trh"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.547950 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.559183 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.559353 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.559543 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.559672 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.559732 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.559822 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.559964 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560015 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560095 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560143 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560185 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560233 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560249 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560318 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560359 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560403 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560442 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560481 4922 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560504 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560630 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.560784 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.561093 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.562379 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.563246 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.563834 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jhchn"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.564544 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.564989 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.565157 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.565153 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5gw2z"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.565521 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.565818 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kzfdk"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.566238 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.566636 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.567301 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.567532 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.569922 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-rxv7b"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.571134 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s5rq6"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.571325 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rxv7b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.571972 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.571982 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-75z9j"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.572570 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.575395 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.575579 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.576976 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.586751 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.597165 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.598902 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jsdpn"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.602141 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.609018 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.640211 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.641564 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.642273 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.642663 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.642997 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.645358 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fd75n"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.645878 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49958f99-8b05-4ebb-9eb6-396020c374eb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.645972 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646518 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.645964 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-bound-sa-token\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646709 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbcfbc26-d582-4311-af16-744933a25f5e-etcd-client\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646736 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-tls\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646757 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbcfbc26-d582-4311-af16-744933a25f5e-serving-cert\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646778 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fbcfbc26-d582-4311-af16-744933a25f5e-audit-dir\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646809 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-92zpk\" (UniqueName: \"kubernetes.io/projected/fbcfbc26-d582-4311-af16-744933a25f5e-kube-api-access-92zpk\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646841 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646862 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbcfbc26-d582-4311-af16-744933a25f5e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646886 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fbcfbc26-d582-4311-af16-744933a25f5e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646903 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp55n\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-kube-api-access-wp55n\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646938 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fbcfbc26-d582-4311-af16-744933a25f5e-encryption-config\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646956 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-trusted-ca\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.646978 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-certificates\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.647006 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/fbcfbc26-d582-4311-af16-744933a25f5e-audit-policies\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.647023 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49958f99-8b05-4ebb-9eb6-396020c374eb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.647185 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.647256 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.647262 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.647578 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.647597 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.647832 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: E0126 14:11:58.648652 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.148635338 +0000 UTC m=+136.350898110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.651344 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.651494 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.651698 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.651715 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.651940 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.652881 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k55sj"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.653396 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.653677 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.653916 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.654055 4922 util.go:30] "No sandbox for pod can be found. 
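The E-level record above is the first hard failure in this stretch: the volume manager cannot MountDevice the image registry's PVC because the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with the restarted kubelet, so the operation is parked with a retry deadline (durationBeforeRetry 500ms). On a typical install the driver re-registers through the kubelet plugin-registration socket under /var/lib/kubelet/plugins_registry/, and `kubectl get csidriver` lists what the cluster believes is registered; both are worth checking if the error outlives a few retries. A minimal sketch for pulling these parked operations out of a saved journal (Python; the regex is written against the exact record format shown above and is an assumption, not a kubelet API):

import re
import sys

# Matches kubelet nestedpendingoperations failures of the form seen above:
#   E0126 14:11:58.648652 4922 nestedpendingoperations.go:348] Operation for
#   "{volumeName:... podName:... nodeName:...}" failed. No retries permitted until <deadline> ...
PARKED = re.compile(
    r'E\d{4} [\d:.]+\s+\d+ nestedpendingoperations\.go:\d+\] '
    r'Operation for "\{volumeName:([^ ]+) podName:([^ ]*) nodeName:[^}]*\}" failed\. '
    r'No retries permitted until (\S+ \S+)'
)

# Usage (hypothetical file name): journalctl -u kubelet | python3 parked_ops.py
for line in sys.stdin:
    m = PARKED.search(line)
    if m:
        volume, pod_uid, deadline = m.groups()
        print(f"{volume}\n  pod UID: {pod_uid or '(none)'}\n  retry after: {deadline}")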
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.652888 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.652967 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.653012 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.653090 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.653152 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.655841 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.656011 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.656162 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.656283 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.656481 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.656586 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.656808 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.657154 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.657274 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.657467 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.657567 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.657743 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.658224 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.658444 4922 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.658488 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.658603 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.658837 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.658971 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.659119 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.659259 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.659395 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.659539 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.659684 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.659800 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.659905 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.660111 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.660656 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.661887 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.662203 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.662444 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.662738 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.663057 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.663258 4922 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.663495 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.663643 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.665025 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.665175 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.666500 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.666739 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.667011 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.667243 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.667352 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.667551 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-26rnv"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.667931 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.668080 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.668602 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zkl86"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.668905 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.669048 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.669079 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.669588 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.670131 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.670545 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.670710 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.670800 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.670880 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.670894 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.671276 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.671577 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.671633 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.673241 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.674238 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5lfvx"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.676059 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.682224 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.709189 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.709851 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.712507 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.715641 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.715718 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.716399 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.716807 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.718581 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.721123 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.726744 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z7f85"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.727564 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.727634 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.730457 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.735705 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.736101 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.738964 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.739441 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.740949 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dst2r"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.740983 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.741043 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.746933 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.747277 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:58 crc kubenswrapper[4922]: E0126 14:11:58.747508 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.247478054 +0000 UTC m=+136.449740826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.747661 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.747793 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-config\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.747991 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.748125 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-dir\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.748392 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6761c679-5216-4857-adda-6f032dffb3ad-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.748675 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmlwm\" (UniqueName: \"kubernetes.io/projected/593cb82e-ba04-47e6-b240-308c22a14457-kube-api-access-kmlwm\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.748792 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749007 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6ghs\" (UniqueName: \"kubernetes.io/projected/8fb60f34-dde7-46b1-a726-b1d796ee0d34-kube-api-access-q6ghs\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749168 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6761c679-5216-4857-adda-6f032dffb3ad-config\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.748480 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9tg4w"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749265 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749418 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-config\") pod 
\"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749465 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/789d53a9-3224-4166-83b8-b242de99a397-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pjjdt\" (UID: \"789d53a9-3224-4166-83b8-b242de99a397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749516 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-certificates\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749547 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/68e6d11a-9d45-42a9-a366-ee3485704024-stats-auth\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749578 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f5f606-f217-4189-8d05-f97fe5aeabc2-serving-cert\") pod \"openshift-config-operator-7777fb866f-r8trh\" (UID: \"a4f5f606-f217-4189-8d05-f97fe5aeabc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749607 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebc7443-38b3-490b-bb56-4fade81f4779-config\") pod \"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749632 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-audit-dir\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749659 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxldq\" (UniqueName: \"kubernetes.io/projected/cc1ae0ea-ace8-40a5-bc24-fc1363e898c1-kube-api-access-rxldq\") pod \"migrator-59844c95c7-jprp4\" (UID: \"cc1ae0ea-ace8-40a5-bc24-fc1363e898c1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749685 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz96q\" (UniqueName: \"kubernetes.io/projected/789d53a9-3224-4166-83b8-b242de99a397-kube-api-access-kz96q\") pod \"olm-operator-6b444d44fb-pjjdt\" (UID: 
\"789d53a9-3224-4166-83b8-b242de99a397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749714 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-policies\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749742 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6fb8\" (UniqueName: \"kubernetes.io/projected/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-kube-api-access-q6fb8\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749814 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmms9\" (UniqueName: \"kubernetes.io/projected/046e5172-4319-4334-b411-b7308f761ed0-kube-api-access-vmms9\") pod \"catalog-operator-68c6474976-zspft\" (UID: \"046e5172-4319-4334-b411-b7308f761ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749840 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/978b98ab-c044-4683-aabd-5d2b4d3cab70-apiservice-cert\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749878 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/789d53a9-3224-4166-83b8-b242de99a397-srv-cert\") pod \"olm-operator-6b444d44fb-pjjdt\" (UID: \"789d53a9-3224-4166-83b8-b242de99a397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749910 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/317b3c90-5ae1-4952-b54d-92606226bd5c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zkjrq\" (UID: \"317b3c90-5ae1-4952-b54d-92606226bd5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749941 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2lmx\" (UniqueName: \"kubernetes.io/projected/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-kube-api-access-t2lmx\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749967 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebc7443-38b3-490b-bb56-4fade81f4779-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.749990 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-node-pullsecrets\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750011 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wmtm\" (UniqueName: \"kubernetes.io/projected/ded42282-8aa9-4480-923f-87fa83ed5e7e-kube-api-access-7wmtm\") pod \"downloads-7954f5f757-rxv7b\" (UID: \"ded42282-8aa9-4480-923f-87fa83ed5e7e\") " pod="openshift-console/downloads-7954f5f757-rxv7b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750034 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-images\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750097 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s95n9\" (UniqueName: \"kubernetes.io/projected/661b43ba-1b61-4984-af89-7e204e7074a8-kube-api-access-s95n9\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xkr4\" (UID: \"661b43ba-1b61-4984-af89-7e204e7074a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750126 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750155 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncq9h\" (UniqueName: \"kubernetes.io/projected/44941c5b-19db-4444-81e0-5aa978534263-kube-api-access-ncq9h\") pod \"kube-storage-version-migrator-operator-b67b599dd-zdkj9\" (UID: \"44941c5b-19db-4444-81e0-5aa978534263\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750181 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvtbh\" (UniqueName: \"kubernetes.io/projected/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-kube-api-access-rvtbh\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750205 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/593cb82e-ba04-47e6-b240-308c22a14457-serving-cert\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750233 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/593cb82e-ba04-47e6-b240-308c22a14457-trusted-ca\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750261 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/154530fd-6283-436e-8fa7-c7b891904e4d-metrics-tls\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750285 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-service-ca\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750317 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv28t\" (UniqueName: \"kubernetes.io/projected/12e31154-e0cc-4aa6-802b-31590a683866-kube-api-access-bv28t\") pod \"marketplace-operator-79b997595-26rnv\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") " pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750373 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-client-ca\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750408 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750440 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9w4\" (UniqueName: \"kubernetes.io/projected/68e6d11a-9d45-42a9-a366-ee3485704024-kube-api-access-ts9w4\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750472 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750501 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8fb60f34-dde7-46b1-a726-b1d796ee0d34-images\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750529 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68e6d11a-9d45-42a9-a366-ee3485704024-metrics-certs\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750555 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcscq\" (UniqueName: \"kubernetes.io/projected/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-kube-api-access-kcscq\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750575 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-certificates\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750583 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-client-ca\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750627 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750655 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-tls\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750693 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4f5f606-f217-4189-8d05-f97fe5aeabc2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r8trh\" (UID: \"a4f5f606-f217-4189-8d05-f97fe5aeabc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750728 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6761c679-5216-4857-adda-6f032dffb3ad-service-ca-bundle\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750851 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-kube-api-access-cnqcq\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750880 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbcfbc26-d582-4311-af16-744933a25f5e-serving-cert\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750899 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750919 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a138ebf-196c-4efa-aa3b-5c563c8209e5-etcd-ca\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750937 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ebc7443-38b3-490b-bb56-4fade81f4779-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750981 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r8trh"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.750960 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752487 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fbcfbc26-d582-4311-af16-744933a25f5e-audit-dir\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 
14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752515 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8wq\" (UniqueName: \"kubernetes.io/projected/6761c679-5216-4857-adda-6f032dffb3ad-kube-api-access-mb8wq\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752533 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/593cb82e-ba04-47e6-b240-308c22a14457-config\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752549 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317b3c90-5ae1-4952-b54d-92606226bd5c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zkjrq\" (UID: \"317b3c90-5ae1-4952-b54d-92606226bd5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752565 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7398d291-531c-4828-b93d-81e238b3b6db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vq57m\" (UID: \"7398d291-531c-4828-b93d-81e238b3b6db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752587 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcx85\" (UniqueName: \"kubernetes.io/projected/7398d291-531c-4828-b93d-81e238b3b6db-kube-api-access-dcx85\") pod \"package-server-manager-789f6589d5-vq57m\" (UID: \"7398d291-531c-4828-b93d-81e238b3b6db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752615 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a138ebf-196c-4efa-aa3b-5c563c8209e5-serving-cert\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752686 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fbcfbc26-d582-4311-af16-744933a25f5e-audit-dir\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752711 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b43ba-1b61-4984-af89-7e204e7074a8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xkr4\" (UID: \"661b43ba-1b61-4984-af89-7e204e7074a8\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752813 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-machine-approver-tls\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752847 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/154530fd-6283-436e-8fa7-c7b891904e4d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752905 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752939 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752963 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-oauth-config\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.752987 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8fb60f34-dde7-46b1-a726-b1d796ee0d34-auth-proxy-config\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753012 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a138ebf-196c-4efa-aa3b-5c563c8209e5-etcd-service-ca\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753030 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-image-import-ca\") pod \"apiserver-76f77b778f-9tg4w\" (UID: 
\"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753057 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/978b98ab-c044-4683-aabd-5d2b4d3cab70-kube-api-access-ffjll\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753104 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-trusted-ca-bundle\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753127 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-config\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753153 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp55n\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-kube-api-access-wp55n\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753175 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/046e5172-4319-4334-b411-b7308f761ed0-profile-collector-cert\") pod \"catalog-operator-68c6474976-zspft\" (UID: \"046e5172-4319-4334-b411-b7308f761ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753227 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-oauth-serving-cert\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753229 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vglvd"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753248 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-etcd-serving-ca\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753557 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-auth-proxy-config\") pod 
\"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753596 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/232b5ebc-174d-4d6e-b172-854c92006872-signing-key\") pod \"service-ca-9c57cc56f-5gw2z\" (UID: \"232b5ebc-174d-4d6e-b172-854c92006872\") " pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753611 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-config\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753642 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-trusted-ca\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753658 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fbcfbc26-d582-4311-af16-744933a25f5e-encryption-config\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753678 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44941c5b-19db-4444-81e0-5aa978534263-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zdkj9\" (UID: \"44941c5b-19db-4444-81e0-5aa978534263\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753703 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsnt\" (UniqueName: \"kubernetes.io/projected/3c992b2a-afc8-4b71-ae58-bb6748930ac9-kube-api-access-pcsnt\") pod \"multus-admission-controller-857f4d67dd-kzfdk\" (UID: \"3c992b2a-afc8-4b71-ae58-bb6748930ac9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753726 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/978b98ab-c044-4683-aabd-5d2b4d3cab70-webhook-cert\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: E0126 14:11:58.753761 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.25372969 +0000 UTC m=+136.455992642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753808 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab-metrics-tls\") pod \"dns-operator-744455d44c-jhchn\" (UID: \"fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753852 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753930 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fbcfbc26-d582-4311-af16-744933a25f5e-audit-policies\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.753968 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b9p7\" (UniqueName: \"kubernetes.io/projected/5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60-kube-api-access-5b9p7\") pod \"cluster-samples-operator-665b6dd947-fdzqj\" (UID: \"5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754007 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49958f99-8b05-4ebb-9eb6-396020c374eb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754051 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fb60f34-dde7-46b1-a726-b1d796ee0d34-proxy-tls\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754110 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6761c679-5216-4857-adda-6f032dffb3ad-serving-cert\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: 
I0126 14:11:58.754156 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzs6x\" (UniqueName: \"kubernetes.io/projected/a4f5f606-f217-4189-8d05-f97fe5aeabc2-kube-api-access-hzs6x\") pod \"openshift-config-operator-7777fb866f-r8trh\" (UID: \"a4f5f606-f217-4189-8d05-f97fe5aeabc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754195 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-encryption-config\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754238 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b43ba-1b61-4984-af89-7e204e7074a8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xkr4\" (UID: \"661b43ba-1b61-4984-af89-7e204e7074a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754270 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-etcd-client\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754310 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49958f99-8b05-4ebb-9eb6-396020c374eb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754357 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-bound-sa-token\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754400 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbcfbc26-d582-4311-af16-744933a25f5e-etcd-client\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754436 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44941c5b-19db-4444-81e0-5aa978534263-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zdkj9\" (UID: \"44941c5b-19db-4444-81e0-5aa978534263\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754476 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754505 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-config\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754538 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bab675e-e24a-43aa-abdd-0e657671535d-serving-cert\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754565 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fdzqj\" (UID: \"5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754598 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/046e5172-4319-4334-b411-b7308f761ed0-srv-cert\") pod \"catalog-operator-68c6474976-zspft\" (UID: \"046e5172-4319-4334-b411-b7308f761ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754631 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/68e6d11a-9d45-42a9-a366-ee3485704024-default-certificate\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754664 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvznv\" (UniqueName: \"kubernetes.io/projected/2a138ebf-196c-4efa-aa3b-5c563c8209e5-kube-api-access-wvznv\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754693 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/978b98ab-c044-4683-aabd-5d2b4d3cab70-tmpfs\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754738 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-26rnv\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") " pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754770 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754799 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/232b5ebc-174d-4d6e-b172-854c92006872-signing-cabundle\") pod \"service-ca-9c57cc56f-5gw2z\" (UID: \"232b5ebc-174d-4d6e-b172-854c92006872\") " pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754841 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwqff\" (UniqueName: \"kubernetes.io/projected/232b5ebc-174d-4d6e-b172-854c92006872-kube-api-access-vwqff\") pod \"service-ca-9c57cc56f-5gw2z\" (UID: \"232b5ebc-174d-4d6e-b172-854c92006872\") " pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754874 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqjvt\" (UniqueName: \"kubernetes.io/projected/154530fd-6283-436e-8fa7-c7b891904e4d-kube-api-access-fqjvt\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754901 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754916 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-26rnv\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") " pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754949 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-audit\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.754981 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92zpk\" (UniqueName: \"kubernetes.io/projected/fbcfbc26-d582-4311-af16-744933a25f5e-kube-api-access-92zpk\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755017 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzpf\" (UniqueName: \"kubernetes.io/projected/fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab-kube-api-access-fzzpf\") pod \"dns-operator-744455d44c-jhchn\" (UID: \"fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755045 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755097 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5785\" (UniqueName: \"kubernetes.io/projected/7bab675e-e24a-43aa-abdd-0e657671535d-kube-api-access-q5785\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755098 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-trusted-ca\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755135 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-serving-cert\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 
14:11:58.755177 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbcfbc26-d582-4311-af16-744933a25f5e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755206 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwc7z\" (UniqueName: \"kubernetes.io/projected/317b3c90-5ae1-4952-b54d-92606226bd5c-kube-api-access-rwc7z\") pod \"openshift-apiserver-operator-796bbdcf4f-zkjrq\" (UID: \"317b3c90-5ae1-4952-b54d-92606226bd5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755229 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3c992b2a-afc8-4b71-ae58-bb6748930ac9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kzfdk\" (UID: \"3c992b2a-afc8-4b71-ae58-bb6748930ac9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755251 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-serving-cert\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755275 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p62r\" (UniqueName: \"kubernetes.io/projected/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-kube-api-access-9p62r\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755334 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fbcfbc26-d582-4311-af16-744933a25f5e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755360 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68e6d11a-9d45-42a9-a366-ee3485704024-service-ca-bundle\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755388 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a138ebf-196c-4efa-aa3b-5c563c8209e5-etcd-client\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755414 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.755984 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbcfbc26-d582-4311-af16-744933a25f5e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.756218 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49958f99-8b05-4ebb-9eb6-396020c374eb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.756238 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fbcfbc26-d582-4311-af16-744933a25f5e-audit-policies\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.756589 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fbcfbc26-d582-4311-af16-744933a25f5e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.756641 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-config\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.756676 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a138ebf-196c-4efa-aa3b-5c563c8209e5-config\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.756704 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/154530fd-6283-436e-8fa7-c7b891904e4d-trusted-ca\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.756801 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.756825 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s5rq6"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 
14:11:58.757627 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jhchn"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.758505 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rxv7b"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.758609 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fbcfbc26-d582-4311-af16-744933a25f5e-encryption-config\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.759141 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-tls\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.759480 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbcfbc26-d582-4311-af16-744933a25f5e-serving-cert\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.759690 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbcfbc26-d582-4311-af16-744933a25f5e-etcd-client\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.761089 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49958f99-8b05-4ebb-9eb6-396020c374eb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.761242 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5gw2z"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.762388 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.763421 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.764455 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kzfdk"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.766124 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.767952 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.769490 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-jsdpn"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.770929 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.772142 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.772525 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.773527 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z7f85"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.774979 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vzxs5"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.777400 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.778092 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vzxs5" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.779035 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qrlv6"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.779958 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qrlv6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.780369 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.781964 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.783134 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.784317 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.785828 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.786041 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k55sj"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.788991 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.790649 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-75z9j"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.791954 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv"] Jan 26 14:11:58 crc 
kubenswrapper[4922]: I0126 14:11:58.793155 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zkl86"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.796282 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fd75n"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.798071 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-26rnv"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.799275 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.801256 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.802603 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.804737 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vzxs5"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.806314 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.807247 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.808106 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vglvd"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.810298 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5lfvx"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.811526 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qrlv6"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.812718 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kchd7"] Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.815288 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.827713 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.846678 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857276 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:58 crc kubenswrapper[4922]: E0126 14:11:58.857444 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.357401357 +0000 UTC m=+136.559664129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857514 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3529a429-628d-4c73-aaad-ee3719ea2022-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qhcrv\" (UID: \"3529a429-628d-4c73-aaad-ee3719ea2022\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857552 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fb60f34-dde7-46b1-a726-b1d796ee0d34-proxy-tls\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857586 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzs6x\" (UniqueName: \"kubernetes.io/projected/a4f5f606-f217-4189-8d05-f97fe5aeabc2-kube-api-access-hzs6x\") pod \"openshift-config-operator-7777fb866f-r8trh\" (UID: \"a4f5f606-f217-4189-8d05-f97fe5aeabc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857622 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-encryption-config\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 
14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857648 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6761c679-5216-4857-adda-6f032dffb3ad-serving-cert\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857669 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b43ba-1b61-4984-af89-7e204e7074a8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xkr4\" (UID: \"661b43ba-1b61-4984-af89-7e204e7074a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857689 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-etcd-client\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857711 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9829fed-e403-4cc4-b9be-88ca53421931-config-volume\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857741 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44941c5b-19db-4444-81e0-5aa978534263-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zdkj9\" (UID: \"44941c5b-19db-4444-81e0-5aa978534263\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857759 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d82a6ce2-925e-45e8-8286-932ecb869755-serving-cert\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857779 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857799 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-config\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857819 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bab675e-e24a-43aa-abdd-0e657671535d-serving-cert\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857836 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/046e5172-4319-4334-b411-b7308f761ed0-srv-cert\") pod \"catalog-operator-68c6474976-zspft\" (UID: \"046e5172-4319-4334-b411-b7308f761ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857853 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fdzqj\" (UID: \"5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857872 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/68e6d11a-9d45-42a9-a366-ee3485704024-default-certificate\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857890 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvznv\" (UniqueName: \"kubernetes.io/projected/2a138ebf-196c-4efa-aa3b-5c563c8209e5-kube-api-access-wvznv\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857907 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/978b98ab-c044-4683-aabd-5d2b4d3cab70-tmpfs\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857928 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857952 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-26rnv\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") " pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857973 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/232b5ebc-174d-4d6e-b172-854c92006872-signing-cabundle\") pod \"service-ca-9c57cc56f-5gw2z\" (UID: 
\"232b5ebc-174d-4d6e-b172-854c92006872\") " pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.857992 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwqff\" (UniqueName: \"kubernetes.io/projected/232b5ebc-174d-4d6e-b172-854c92006872-kube-api-access-vwqff\") pod \"service-ca-9c57cc56f-5gw2z\" (UID: \"232b5ebc-174d-4d6e-b172-854c92006872\") " pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858009 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqjvt\" (UniqueName: \"kubernetes.io/projected/154530fd-6283-436e-8fa7-c7b891904e4d-kube-api-access-fqjvt\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858031 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-plugins-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858083 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-26rnv\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") " pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858105 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538a74fd-fc9a-49f8-83cc-c33a83d15081-secret-volume\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858124 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-registration-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858178 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-audit\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858199 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzzpf\" (UniqueName: \"kubernetes.io/projected/fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab-kube-api-access-fzzpf\") pod \"dns-operator-744455d44c-jhchn\" (UID: \"fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858219 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q5785\" (UniqueName: \"kubernetes.io/projected/7bab675e-e24a-43aa-abdd-0e657671535d-kube-api-access-q5785\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858238 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-serving-cert\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858261 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858283 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwc7z\" (UniqueName: \"kubernetes.io/projected/317b3c90-5ae1-4952-b54d-92606226bd5c-kube-api-access-rwc7z\") pod \"openshift-apiserver-operator-796bbdcf4f-zkjrq\" (UID: \"317b3c90-5ae1-4952-b54d-92606226bd5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858305 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3c992b2a-afc8-4b71-ae58-bb6748930ac9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kzfdk\" (UID: \"3c992b2a-afc8-4b71-ae58-bb6748930ac9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858326 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-serving-cert\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858346 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p62r\" (UniqueName: \"kubernetes.io/projected/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-kube-api-access-9p62r\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858377 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef694d4b-2d06-4c04-924e-698540e3692a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858396 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-socket-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858419 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68e6d11a-9d45-42a9-a366-ee3485704024-service-ca-bundle\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858442 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a138ebf-196c-4efa-aa3b-5c563c8209e5-etcd-client\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858463 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-config\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858482 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a138ebf-196c-4efa-aa3b-5c563c8209e5-config\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858501 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/154530fd-6283-436e-8fa7-c7b891904e4d-trusted-ca\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858522 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858541 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-config\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858561 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 
14:11:58.858582 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858602 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26a6323f-5055-4036-bc4e-0e1d9239ee73-cert\") pod \"ingress-canary-vzxs5\" (UID: \"26a6323f-5055-4036-bc4e-0e1d9239ee73\") " pod="openshift-ingress-canary/ingress-canary-vzxs5" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858621 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d69891-912a-4536-98d9-55ebf41a3ae1-config\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858641 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6761c679-5216-4857-adda-6f032dffb3ad-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858660 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-dir\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858680 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858703 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6ghs\" (UniqueName: \"kubernetes.io/projected/8fb60f34-dde7-46b1-a726-b1d796ee0d34-kube-api-access-q6ghs\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858728 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gg96\" (UniqueName: \"kubernetes.io/projected/26a6323f-5055-4036-bc4e-0e1d9239ee73-kube-api-access-2gg96\") pod \"ingress-canary-vzxs5\" (UID: \"26a6323f-5055-4036-bc4e-0e1d9239ee73\") " pod="openshift-ingress-canary/ingress-canary-vzxs5" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858785 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ljcp4\" (UniqueName: \"kubernetes.io/projected/3529a429-628d-4c73-aaad-ee3719ea2022-kube-api-access-ljcp4\") pod \"control-plane-machine-set-operator-78cbb6b69f-qhcrv\" (UID: \"3529a429-628d-4c73-aaad-ee3719ea2022\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858819 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmlwm\" (UniqueName: \"kubernetes.io/projected/593cb82e-ba04-47e6-b240-308c22a14457-kube-api-access-kmlwm\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858840 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6761c679-5216-4857-adda-6f032dffb3ad-config\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858864 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858884 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-config\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858903 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/789d53a9-3224-4166-83b8-b242de99a397-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pjjdt\" (UID: \"789d53a9-3224-4166-83b8-b242de99a397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858924 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef694d4b-2d06-4c04-924e-698540e3692a-proxy-tls\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858945 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/68e6d11a-9d45-42a9-a366-ee3485704024-stats-auth\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858965 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a4f5f606-f217-4189-8d05-f97fe5aeabc2-serving-cert\") pod \"openshift-config-operator-7777fb866f-r8trh\" (UID: \"a4f5f606-f217-4189-8d05-f97fe5aeabc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.858989 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebc7443-38b3-490b-bb56-4fade81f4779-config\") pod \"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859009 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-audit-dir\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859029 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz96q\" (UniqueName: \"kubernetes.io/projected/789d53a9-3224-4166-83b8-b242de99a397-kube-api-access-kz96q\") pod \"olm-operator-6b444d44fb-pjjdt\" (UID: \"789d53a9-3224-4166-83b8-b242de99a397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859047 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2l9b\" (UniqueName: \"kubernetes.io/projected/d82a6ce2-925e-45e8-8286-932ecb869755-kube-api-access-p2l9b\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859095 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-policies\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859116 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxldq\" (UniqueName: \"kubernetes.io/projected/cc1ae0ea-ace8-40a5-bc24-fc1363e898c1-kube-api-access-rxldq\") pod \"migrator-59844c95c7-jprp4\" (UID: \"cc1ae0ea-ace8-40a5-bc24-fc1363e898c1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859135 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmms9\" (UniqueName: \"kubernetes.io/projected/046e5172-4319-4334-b411-b7308f761ed0-kube-api-access-vmms9\") pod \"catalog-operator-68c6474976-zspft\" (UID: \"046e5172-4319-4334-b411-b7308f761ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859156 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/978b98ab-c044-4683-aabd-5d2b4d3cab70-apiservice-cert\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: 
\"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859175 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/789d53a9-3224-4166-83b8-b242de99a397-srv-cert\") pod \"olm-operator-6b444d44fb-pjjdt\" (UID: \"789d53a9-3224-4166-83b8-b242de99a397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859194 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d69891-912a-4536-98d9-55ebf41a3ae1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859215 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6fb8\" (UniqueName: \"kubernetes.io/projected/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-kube-api-access-q6fb8\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859235 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2lmx\" (UniqueName: \"kubernetes.io/projected/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-kube-api-access-t2lmx\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859256 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebc7443-38b3-490b-bb56-4fade81f4779-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859277 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cfa152-ddce-4531-9d48-07f06824d74b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2pbkx\" (UID: \"b0cfa152-ddce-4531-9d48-07f06824d74b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859301 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/317b3c90-5ae1-4952-b54d-92606226bd5c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zkjrq\" (UID: \"317b3c90-5ae1-4952-b54d-92606226bd5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.859323 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wmtm\" (UniqueName: \"kubernetes.io/projected/ded42282-8aa9-4480-923f-87fa83ed5e7e-kube-api-access-7wmtm\") pod \"downloads-7954f5f757-rxv7b\" (UID: 
\"ded42282-8aa9-4480-923f-87fa83ed5e7e\") " pod="openshift-console/downloads-7954f5f757-rxv7b" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.860042 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-images\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.860751 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68e6d11a-9d45-42a9-a366-ee3485704024-service-ca-bundle\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.861480 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-config\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.861591 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/978b98ab-c044-4683-aabd-5d2b4d3cab70-tmpfs\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.861927 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-audit-dir\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.862802 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-config\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.863297 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-csi-data-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.863338 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-node-pullsecrets\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.863391 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s95n9\" (UniqueName: \"kubernetes.io/projected/661b43ba-1b61-4984-af89-7e204e7074a8-kube-api-access-s95n9\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-8xkr4\" (UID: \"661b43ba-1b61-4984-af89-7e204e7074a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.863419 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.863445 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncq9h\" (UniqueName: \"kubernetes.io/projected/44941c5b-19db-4444-81e0-5aa978534263-kube-api-access-ncq9h\") pod \"kube-storage-version-migrator-operator-b67b599dd-zdkj9\" (UID: \"44941c5b-19db-4444-81e0-5aa978534263\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.863588 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8442e2a1-6734-46b7-a163-701037959001-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.863682 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-dir\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.863852 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-node-pullsecrets\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.863923 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-config\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.864372 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fb60f34-dde7-46b1-a726-b1d796ee0d34-proxy-tls\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.864494 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-images\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: 
\"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.865011 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.865511 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-etcd-client\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.865965 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/232b5ebc-174d-4d6e-b172-854c92006872-signing-cabundle\") pod \"service-ca-9c57cc56f-5gw2z\" (UID: \"232b5ebc-174d-4d6e-b172-854c92006872\") " pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.865863 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/68e6d11a-9d45-42a9-a366-ee3485704024-default-certificate\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.865579 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/154530fd-6283-436e-8fa7-c7b891904e4d-trusted-ca\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.867482 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/68e6d11a-9d45-42a9-a366-ee3485704024-stats-auth\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.867773 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-audit\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.867866 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-serving-cert\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.868009 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.868421 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f5f606-f217-4189-8d05-f97fe5aeabc2-serving-cert\") pod \"openshift-config-operator-7777fb866f-r8trh\" (UID: \"a4f5f606-f217-4189-8d05-f97fe5aeabc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.868733 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6898\" (UniqueName: \"kubernetes.io/projected/8442e2a1-6734-46b7-a163-701037959001-kube-api-access-z6898\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.868792 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvtbh\" (UniqueName: \"kubernetes.io/projected/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-kube-api-access-rvtbh\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.868857 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/593cb82e-ba04-47e6-b240-308c22a14457-trusted-ca\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.868886 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-encryption-config\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.868886 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/154530fd-6283-436e-8fa7-c7b891904e4d-metrics-tls\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.868950 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-service-ca\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869002 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/593cb82e-ba04-47e6-b240-308c22a14457-serving-cert\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869029 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-client-ca\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869049 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869128 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/317b3c90-5ae1-4952-b54d-92606226bd5c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zkjrq\" (UID: \"317b3c90-5ae1-4952-b54d-92606226bd5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869132 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3c992b2a-afc8-4b71-ae58-bb6748930ac9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kzfdk\" (UID: \"3c992b2a-afc8-4b71-ae58-bb6748930ac9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869411 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-serving-cert\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869794 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-service-ca\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869800 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv28t\" (UniqueName: \"kubernetes.io/projected/12e31154-e0cc-4aa6-802b-31590a683866-kube-api-access-bv28t\") pod \"marketplace-operator-79b997595-26rnv\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") " pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869870 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9w4\" (UniqueName: \"kubernetes.io/projected/68e6d11a-9d45-42a9-a366-ee3485704024-kube-api-access-ts9w4\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869890 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/789d53a9-3224-4166-83b8-b242de99a397-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pjjdt\" (UID: \"789d53a9-3224-4166-83b8-b242de99a397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869929 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-serving-cert\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869963 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8fb60f34-dde7-46b1-a726-b1d796ee0d34-images\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.869989 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzd67\" (UniqueName: \"kubernetes.io/projected/538a74fd-fc9a-49f8-83cc-c33a83d15081-kube-api-access-dzd67\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.870088 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcscq\" (UniqueName: \"kubernetes.io/projected/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-kube-api-access-kcscq\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.870289 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-client-ca\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.870442 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68e6d11a-9d45-42a9-a366-ee3485704024-metrics-certs\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.870560 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4f5f606-f217-4189-8d05-f97fe5aeabc2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r8trh\" (UID: \"a4f5f606-f217-4189-8d05-f97fe5aeabc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871106 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871388 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-kube-api-access-cnqcq\") pod \"console-f9d7485db-fd75n\" (UID: 
\"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871385 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-client-ca\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871444 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0cfa152-ddce-4531-9d48-07f06824d74b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2pbkx\" (UID: \"b0cfa152-ddce-4531-9d48-07f06824d74b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871497 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4f5f606-f217-4189-8d05-f97fe5aeabc2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r8trh\" (UID: \"a4f5f606-f217-4189-8d05-f97fe5aeabc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871511 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8fb60f34-dde7-46b1-a726-b1d796ee0d34-images\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871545 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6761c679-5216-4857-adda-6f032dffb3ad-service-ca-bundle\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871583 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fdzqj\" (UID: \"5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871741 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871819 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a138ebf-196c-4efa-aa3b-5c563c8209e5-etcd-ca\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.871959 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9829fed-e403-4cc4-b9be-88ca53421931-metrics-tls\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872045 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d69891-912a-4536-98d9-55ebf41a3ae1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872164 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ebc7443-38b3-490b-bb56-4fade81f4779-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872182 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/046e5172-4319-4334-b411-b7308f761ed0-srv-cert\") pod \"catalog-operator-68c6474976-zspft\" (UID: \"046e5172-4319-4334-b411-b7308f761ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872265 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbz9\" (UniqueName: \"kubernetes.io/projected/d9829fed-e403-4cc4-b9be-88ca53421931-kube-api-access-2pbz9\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872310 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-client-ca\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872331 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872400 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb8wq\" (UniqueName: \"kubernetes.io/projected/6761c679-5216-4857-adda-6f032dffb3ad-kube-api-access-mb8wq\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872455 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/593cb82e-ba04-47e6-b240-308c22a14457-config\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872491 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317b3c90-5ae1-4952-b54d-92606226bd5c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zkjrq\" (UID: \"317b3c90-5ae1-4952-b54d-92606226bd5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872531 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a138ebf-196c-4efa-aa3b-5c563c8209e5-etcd-ca\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872582 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7398d291-531c-4828-b93d-81e238b3b6db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vq57m\" (UID: \"7398d291-531c-4828-b93d-81e238b3b6db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872646 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcx85\" (UniqueName: \"kubernetes.io/projected/7398d291-531c-4828-b93d-81e238b3b6db-kube-api-access-dcx85\") pod \"package-server-manager-789f6589d5-vq57m\" (UID: \"7398d291-531c-4828-b93d-81e238b3b6db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.872782 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-config\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873101 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317b3c90-5ae1-4952-b54d-92606226bd5c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zkjrq\" (UID: \"317b3c90-5ae1-4952-b54d-92606226bd5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873178 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8442e2a1-6734-46b7-a163-701037959001-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873233 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b0cfa152-ddce-4531-9d48-07f06824d74b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2pbkx\" (UID: \"b0cfa152-ddce-4531-9d48-07f06824d74b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873263 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b43ba-1b61-4984-af89-7e204e7074a8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xkr4\" (UID: \"661b43ba-1b61-4984-af89-7e204e7074a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873288 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a138ebf-196c-4efa-aa3b-5c563c8209e5-serving-cert\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873312 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-machine-approver-tls\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873335 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/154530fd-6283-436e-8fa7-c7b891904e4d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873358 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873381 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-oauth-config\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873404 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538a74fd-fc9a-49f8-83cc-c33a83d15081-config-volume\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873433 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873438 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/593cb82e-ba04-47e6-b240-308c22a14457-serving-cert\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873458 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8fb60f34-dde7-46b1-a726-b1d796ee0d34-auth-proxy-config\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873441 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/593cb82e-ba04-47e6-b240-308c22a14457-config\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873484 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a138ebf-196c-4efa-aa3b-5c563c8209e5-etcd-service-ca\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873507 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-image-import-ca\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873529 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/978b98ab-c044-4683-aabd-5d2b4d3cab70-kube-api-access-ffjll\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873549 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-trusted-ca-bundle\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873572 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xgkf\" (UniqueName: \"kubernetes.io/projected/93331b07-0680-4f0a-b2d6-e629aa6b207b-kube-api-access-4xgkf\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873577 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bab675e-e24a-43aa-abdd-0e657671535d-serving-cert\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.873603 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-config\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874021 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-oauth-serving-cert\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874234 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-etcd-serving-ca\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874297 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/046e5172-4319-4334-b411-b7308f761ed0-profile-collector-cert\") pod \"catalog-operator-68c6474976-zspft\" (UID: \"046e5172-4319-4334-b411-b7308f761ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874356 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-auth-proxy-config\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874379 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/593cb82e-ba04-47e6-b240-308c22a14457-trusted-ca\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874442 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/232b5ebc-174d-4d6e-b172-854c92006872-signing-key\") pod \"service-ca-9c57cc56f-5gw2z\" (UID: \"232b5ebc-174d-4d6e-b172-854c92006872\") " pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874487 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8fb60f34-dde7-46b1-a726-b1d796ee0d34-auth-proxy-config\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874496 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-config\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874586 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44941c5b-19db-4444-81e0-5aa978534263-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zdkj9\" (UID: \"44941c5b-19db-4444-81e0-5aa978534263\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874652 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sttnp\" (UniqueName: \"kubernetes.io/projected/ef694d4b-2d06-4c04-924e-698540e3692a-kube-api-access-sttnp\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874717 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/978b98ab-c044-4683-aabd-5d2b4d3cab70-webhook-cert\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874766 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-config\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874777 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8442e2a1-6734-46b7-a163-701037959001-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: E0126 14:11:58.874783 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.374769222 +0000 UTC m=+136.577031994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874846 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-trusted-ca-bundle\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874870 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcsnt\" (UniqueName: \"kubernetes.io/projected/3c992b2a-afc8-4b71-ae58-bb6748930ac9-kube-api-access-pcsnt\") pod \"multus-admission-controller-857f4d67dd-kzfdk\" (UID: \"3c992b2a-afc8-4b71-ae58-bb6748930ac9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874933 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab-metrics-tls\") pod \"dns-operator-744455d44c-jhchn\" (UID: \"fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.874986 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.875044 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d82a6ce2-925e-45e8-8286-932ecb869755-config\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.875181 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-mountpoint-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.875263 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b9p7\" (UniqueName: \"kubernetes.io/projected/5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60-kube-api-access-5b9p7\") pod \"cluster-samples-operator-665b6dd947-fdzqj\" (UID: \"5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.875619 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-image-import-ca\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.875690 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/154530fd-6283-436e-8fa7-c7b891904e4d-metrics-tls\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.876257 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44941c5b-19db-4444-81e0-5aa978534263-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zdkj9\" (UID: \"44941c5b-19db-4444-81e0-5aa978534263\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.876364 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-etcd-serving-ca\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.876690 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-oauth-serving-cert\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.876839 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-auth-proxy-config\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.878354 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7398d291-531c-4828-b93d-81e238b3b6db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vq57m\" (UID: \"7398d291-531c-4828-b93d-81e238b3b6db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.878897 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-config\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.879380 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/046e5172-4319-4334-b411-b7308f761ed0-profile-collector-cert\") pod \"catalog-operator-68c6474976-zspft\" (UID: \"046e5172-4319-4334-b411-b7308f761ed0\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.879610 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-serving-cert\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.880827 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68e6d11a-9d45-42a9-a366-ee3485704024-metrics-certs\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.886046 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-oauth-config\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.886329 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.886380 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab-metrics-tls\") pod \"dns-operator-744455d44c-jhchn\" (UID: \"fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.886612 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-machine-approver-tls\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.887059 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/232b5ebc-174d-4d6e-b172-854c92006872-signing-key\") pod \"service-ca-9c57cc56f-5gw2z\" (UID: \"232b5ebc-174d-4d6e-b172-854c92006872\") " pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.888899 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.899742 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44941c5b-19db-4444-81e0-5aa978534263-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zdkj9\" (UID: \"44941c5b-19db-4444-81e0-5aa978534263\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.907118 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.927098 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.946354 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.968289 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976203 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:58 crc kubenswrapper[4922]: E0126 14:11:58.976383 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.476344533 +0000 UTC m=+136.678607315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976475 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-registration-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976521 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538a74fd-fc9a-49f8-83cc-c33a83d15081-secret-volume\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976631 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef694d4b-2d06-4c04-924e-698540e3692a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976660 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-socket-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976753 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d69891-912a-4536-98d9-55ebf41a3ae1-config\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976787 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26a6323f-5055-4036-bc4e-0e1d9239ee73-cert\") pod \"ingress-canary-vzxs5\" (UID: \"26a6323f-5055-4036-bc4e-0e1d9239ee73\") " pod="openshift-ingress-canary/ingress-canary-vzxs5" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976823 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gg96\" (UniqueName: \"kubernetes.io/projected/26a6323f-5055-4036-bc4e-0e1d9239ee73-kube-api-access-2gg96\") pod \"ingress-canary-vzxs5\" (UID: \"26a6323f-5055-4036-bc4e-0e1d9239ee73\") " pod="openshift-ingress-canary/ingress-canary-vzxs5" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976847 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljcp4\" (UniqueName: \"kubernetes.io/projected/3529a429-628d-4c73-aaad-ee3719ea2022-kube-api-access-ljcp4\") pod \"control-plane-machine-set-operator-78cbb6b69f-qhcrv\" (UID: \"3529a429-628d-4c73-aaad-ee3719ea2022\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976937 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef694d4b-2d06-4c04-924e-698540e3692a-proxy-tls\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976949 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-registration-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.976974 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2l9b\" (UniqueName: \"kubernetes.io/projected/d82a6ce2-925e-45e8-8286-932ecb869755-kube-api-access-p2l9b\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977080 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-certs\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" 
Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977148 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-socket-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977178 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d69891-912a-4536-98d9-55ebf41a3ae1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977262 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cfa152-ddce-4531-9d48-07f06824d74b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2pbkx\" (UID: \"b0cfa152-ddce-4531-9d48-07f06824d74b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977335 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-csi-data-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977410 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8442e2a1-6734-46b7-a163-701037959001-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977435 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6898\" (UniqueName: \"kubernetes.io/projected/8442e2a1-6734-46b7-a163-701037959001-kube-api-access-z6898\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977533 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzd67\" (UniqueName: \"kubernetes.io/projected/538a74fd-fc9a-49f8-83cc-c33a83d15081-kube-api-access-dzd67\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977597 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0cfa152-ddce-4531-9d48-07f06824d74b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2pbkx\" (UID: \"b0cfa152-ddce-4531-9d48-07f06824d74b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977633 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-csi-data-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977657 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d69891-912a-4536-98d9-55ebf41a3ae1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977708 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9829fed-e403-4cc4-b9be-88ca53421931-metrics-tls\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977739 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pbz9\" (UniqueName: \"kubernetes.io/projected/d9829fed-e403-4cc4-b9be-88ca53421931-kube-api-access-2pbz9\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977827 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-node-bootstrap-token\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977883 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8442e2a1-6734-46b7-a163-701037959001-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977959 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0cfa152-ddce-4531-9d48-07f06824d74b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2pbkx\" (UID: \"b0cfa152-ddce-4531-9d48-07f06824d74b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.977995 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538a74fd-fc9a-49f8-83cc-c33a83d15081-config-volume\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978028 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978035 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef694d4b-2d06-4c04-924e-698540e3692a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978203 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xgkf\" (UniqueName: \"kubernetes.io/projected/93331b07-0680-4f0a-b2d6-e629aa6b207b-kube-api-access-4xgkf\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978314 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sttnp\" (UniqueName: \"kubernetes.io/projected/ef694d4b-2d06-4c04-924e-698540e3692a-kube-api-access-sttnp\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978354 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8442e2a1-6734-46b7-a163-701037959001-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: E0126 14:11:58.978496 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.47848496 +0000 UTC m=+136.680747732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978567 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d82a6ce2-925e-45e8-8286-932ecb869755-config\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978609 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-mountpoint-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978649 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3529a429-628d-4c73-aaad-ee3719ea2022-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qhcrv\" (UID: \"3529a429-628d-4c73-aaad-ee3719ea2022\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978707 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9829fed-e403-4cc4-b9be-88ca53421931-config-volume\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978743 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d82a6ce2-925e-45e8-8286-932ecb869755-serving-cert\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978866 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmszr\" (UniqueName: \"kubernetes.io/projected/7d85b076-254e-4b43-aa7b-9dd8bca7a879-kube-api-access-tmszr\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.978999 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-plugins-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.979108 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-plugins-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.979307 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8442e2a1-6734-46b7-a163-701037959001-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.979405 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/93331b07-0680-4f0a-b2d6-e629aa6b207b-mountpoint-dir\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.984129 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538a74fd-fc9a-49f8-83cc-c33a83d15081-secret-volume\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.986474 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 14:11:58 crc kubenswrapper[4922]: I0126 14:11:58.998478 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a138ebf-196c-4efa-aa3b-5c563c8209e5-serving-cert\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.006221 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.015700 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a138ebf-196c-4efa-aa3b-5c563c8209e5-etcd-client\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.027033 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.046134 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.055458 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a138ebf-196c-4efa-aa3b-5c563c8209e5-etcd-service-ca\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.065967 4922 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.079974 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.080176 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.580133703 +0000 UTC m=+136.782396485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.080734 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-node-bootstrap-token\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.080911 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.081325 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmszr\" (UniqueName: \"kubernetes.io/projected/7d85b076-254e-4b43-aa7b-9dd8bca7a879-kube-api-access-tmszr\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.081443 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.581426734 +0000 UTC m=+136.783689516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.081935 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-certs\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.086670 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.095299 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a138ebf-196c-4efa-aa3b-5c563c8209e5-config\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.106981 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.146795 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.155582 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/978b98ab-c044-4683-aabd-5d2b4d3cab70-apiservice-cert\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.160828 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/978b98ab-c044-4683-aabd-5d2b4d3cab70-webhook-cert\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.168113 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.178148 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/789d53a9-3224-4166-83b8-b242de99a397-srv-cert\") pod \"olm-operator-6b444d44fb-pjjdt\" (UID: \"789d53a9-3224-4166-83b8-b242de99a397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.185816 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.186177 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.686148773 +0000 UTC m=+136.888411585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.186301 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.187058 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.187599 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.687584269 +0000 UTC m=+136.889847211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.208590 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.216395 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebc7443-38b3-490b-bb56-4fade81f4779-config\") pod \"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.227715 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.240179 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-26rnv\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") " pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.257490 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.266310 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.266370 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-26rnv\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") " pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.286750 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.288358 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.288748 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.788655074 +0000 UTC m=+136.990917896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.290625 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.291183 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.791168003 +0000 UTC m=+136.993430775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.297853 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6761c679-5216-4857-adda-6f032dffb3ad-serving-cert\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.306980 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.327225 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.346674 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.366896 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.386918 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.392244 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 
14:11:59.392432 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.892394952 +0000 UTC m=+137.094657764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.393010 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.393850 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.893814957 +0000 UTC m=+137.096077789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.406951 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.413155 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6761c679-5216-4857-adda-6f032dffb3ad-config\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.427239 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.446561 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.452701 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6761c679-5216-4857-adda-6f032dffb3ad-service-ca-bundle\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 
14:11:59.467514 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.487930 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.494144 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661b43ba-1b61-4984-af89-7e204e7074a8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xkr4\" (UID: \"661b43ba-1b61-4984-af89-7e204e7074a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.494430 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.494728 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.994700986 +0000 UTC m=+137.196963788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.495659 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.496130 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:11:59.99610789 +0000 UTC m=+137.198370662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.506721 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.515223 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661b43ba-1b61-4984-af89-7e204e7074a8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xkr4\" (UID: \"661b43ba-1b61-4984-af89-7e204e7074a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.526523 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.553367 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.563201 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6761c679-5216-4857-adda-6f032dffb3ad-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.567850 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.587430 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.596734 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.596839 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.096815284 +0000 UTC m=+137.299078056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.597477 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.597937 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.097927818 +0000 UTC m=+137.300190590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.606195 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.626868 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.635366 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.665054 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.667305 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.675755 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.676382 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.684223 4922 request.go:700] Waited for 1.007400396s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&limit=500&resourceVersion=0 Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.686335 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.695713 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.698926 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.699296 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.199266032 +0000 UTC m=+137.401528824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.699956 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.700700 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.200687667 +0000 UTC m=+137.402950449 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.706132 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.716781 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.731037 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.742914 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.746223 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.758408 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.766268 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.774620 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.787700 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.792512 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-policies\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.801186 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.801773 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.301745081 +0000 UTC m=+137.504007863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.801885 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.802518 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.302508835 +0000 UTC m=+137.504771607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.806919 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.814404 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.827171 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.835092 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.846198 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.862714 4922 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.862838 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ebc7443-38b3-490b-bb56-4fade81f4779-serving-cert podName:5ebc7443-38b3-490b-bb56-4fade81f4779 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.36280784 +0000 UTC m=+137.565070642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5ebc7443-38b3-490b-bb56-4fade81f4779-serving-cert") pod "kube-controller-manager-operator-78b949d7b-d8tp2" (UID: "5ebc7443-38b3-490b-bb56-4fade81f4779") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.871899 4922 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.872011 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-trusted-ca-bundle podName:b7ff7450-4cc5-40df-a820-7cec4a3a9b95 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.371982977 +0000 UTC m=+137.574245749 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-558db77b4-5lfvx" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95") : failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.872734 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.886961 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.903114 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.903341 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.403302601 +0000 UTC m=+137.605565373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.903822 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.904293 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.404278161 +0000 UTC m=+137.606540933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.906801 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.927167 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.948118 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.952781 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0cfa152-ddce-4531-9d48-07f06824d74b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2pbkx\" (UID: \"b0cfa152-ddce-4531-9d48-07f06824d74b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.967312 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.969050 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cfa152-ddce-4531-9d48-07f06824d74b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2pbkx\" (UID: \"b0cfa152-ddce-4531-9d48-07f06824d74b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.977176 4922 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.977303 4922 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.977320 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26a6323f-5055-4036-bc4e-0e1d9239ee73-cert podName:26a6323f-5055-4036-bc4e-0e1d9239ee73 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.477284866 +0000 UTC m=+137.679547828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/26a6323f-5055-4036-bc4e-0e1d9239ee73-cert") pod "ingress-canary-vzxs5" (UID: "26a6323f-5055-4036-bc4e-0e1d9239ee73") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.977379 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef694d4b-2d06-4c04-924e-698540e3692a-proxy-tls podName:ef694d4b-2d06-4c04-924e-698540e3692a nodeName:}" failed. 
No retries permitted until 2026-01-26 14:12:00.477363548 +0000 UTC m=+137.679626320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ef694d4b-2d06-4c04-924e-698540e3692a-proxy-tls") pod "machine-config-controller-84d6567774-l6dpv" (UID: "ef694d4b-2d06-4c04-924e-698540e3692a") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.977208 4922 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.977478 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17d69891-912a-4536-98d9-55ebf41a3ae1-config podName:17d69891-912a-4536-98d9-55ebf41a3ae1 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.477469221 +0000 UTC m=+137.679732443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/17d69891-912a-4536-98d9-55ebf41a3ae1-config") pod "kube-apiserver-operator-766d6c64bb-vlbbw" (UID: "17d69891-912a-4536-98d9-55ebf41a3ae1") : failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.977872 4922 secret.go:188] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.977921 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8442e2a1-6734-46b7-a163-701037959001-image-registry-operator-tls podName:8442e2a1-6734-46b7-a163-701037959001 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.477910645 +0000 UTC m=+137.680173617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8442e2a1-6734-46b7-a163-701037959001-image-registry-operator-tls") pod "cluster-image-registry-operator-dc59b4c8b-r68vc" (UID: "8442e2a1-6734-46b7-a163-701037959001") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.978001 4922 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.978190 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17d69891-912a-4536-98d9-55ebf41a3ae1-serving-cert podName:17d69891-912a-4536-98d9-55ebf41a3ae1 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.478162483 +0000 UTC m=+137.680425255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17d69891-912a-4536-98d9-55ebf41a3ae1-serving-cert") pod "kube-apiserver-operator-766d6c64bb-vlbbw" (UID: "17d69891-912a-4536-98d9-55ebf41a3ae1") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.978356 4922 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.978453 4922 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.978482 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/538a74fd-fc9a-49f8-83cc-c33a83d15081-config-volume podName:538a74fd-fc9a-49f8-83cc-c33a83d15081 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.478452112 +0000 UTC m=+137.680714884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/538a74fd-fc9a-49f8-83cc-c33a83d15081-config-volume") pod "collect-profiles-29490600-74vcv" (UID: "538a74fd-fc9a-49f8-83cc-c33a83d15081") : failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.978508 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9829fed-e403-4cc4-b9be-88ca53421931-metrics-tls podName:d9829fed-e403-4cc4-b9be-88ca53421931 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.478497583 +0000 UTC m=+137.680760365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9829fed-e403-4cc4-b9be-88ca53421931-metrics-tls") pod "dns-default-qrlv6" (UID: "d9829fed-e403-4cc4-b9be-88ca53421931") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.979427 4922 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.979576 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d82a6ce2-925e-45e8-8286-932ecb869755-config podName:d82a6ce2-925e-45e8-8286-932ecb869755 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.479561536 +0000 UTC m=+137.681824308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d82a6ce2-925e-45e8-8286-932ecb869755-config") pod "service-ca-operator-777779d784-z7f85" (UID: "d82a6ce2-925e-45e8-8286-932ecb869755") : failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.979447 4922 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.979725 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9829fed-e403-4cc4-b9be-88ca53421931-config-volume podName:d9829fed-e403-4cc4-b9be-88ca53421931 nodeName:}" failed. 
No retries permitted until 2026-01-26 14:12:00.479717661 +0000 UTC m=+137.681980433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d9829fed-e403-4cc4-b9be-88ca53421931-config-volume") pod "dns-default-qrlv6" (UID: "d9829fed-e403-4cc4-b9be-88ca53421931") : failed to sync configmap cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.979470 4922 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.979875 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82a6ce2-925e-45e8-8286-932ecb869755-serving-cert podName:d82a6ce2-925e-45e8-8286-932ecb869755 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.479867586 +0000 UTC m=+137.682130358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d82a6ce2-925e-45e8-8286-932ecb869755-serving-cert") pod "service-ca-operator-777779d784-z7f85" (UID: "d82a6ce2-925e-45e8-8286-932ecb869755") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.979471 4922 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: E0126 14:11:59.980107 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3529a429-628d-4c73-aaad-ee3719ea2022-control-plane-machine-set-operator-tls podName:3529a429-628d-4c73-aaad-ee3719ea2022 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.480099184 +0000 UTC m=+137.682361956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/3529a429-628d-4c73-aaad-ee3719ea2022-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-qhcrv" (UID: "3529a429-628d-4c73-aaad-ee3719ea2022") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:11:59 crc kubenswrapper[4922]: I0126 14:11:59.986017 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.005051 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.005273 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.505236773 +0000 UTC m=+137.707499545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.005457 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.006024 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.506009858 +0000 UTC m=+137.708272630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.006744 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.026145 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.046168 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.067326 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.081518 4922 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.081657 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-node-bootstrap-token podName:7d85b076-254e-4b43-aa7b-9dd8bca7a879 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.581626413 +0000 UTC m=+137.783889225 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-node-bootstrap-token") pod "machine-config-server-kchd7" (UID: "7d85b076-254e-4b43-aa7b-9dd8bca7a879") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.084547 4922 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.084820 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-certs podName:7d85b076-254e-4b43-aa7b-9dd8bca7a879 nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.58469894 +0000 UTC m=+137.786961752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-certs") pod "machine-config-server-kchd7" (UID: "7d85b076-254e-4b43-aa7b-9dd8bca7a879") : failed to sync secret cache: timed out waiting for the condition Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.087577 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.106590 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.106815 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.606775393 +0000 UTC m=+137.809038165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.107181 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.107661 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.60763996 +0000 UTC m=+137.809902922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.107955 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.126620 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.146949 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.168301 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.187362 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.206868 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.209003 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.209214 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.70918994 +0000 UTC m=+137.911452712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.209517 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.209864 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.709857611 +0000 UTC m=+137.912120383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.226816 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.246579 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.267778 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.287152 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.306862 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.311448 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.311678 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.811653339 +0000 UTC m=+138.013916111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.312595 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.313312 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.813276709 +0000 UTC m=+138.015539631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.344925 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp55n\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-kube-api-access-wp55n\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.345854 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.370837 4922 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.386729 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.413817 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.414010 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.913971872 +0000 UTC m=+138.116234684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.414470 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebc7443-38b3-490b-bb56-4fade81f4779-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.414692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.414865 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.415840 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:00.915736058 +0000 UTC m=+138.117998860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.416751 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.418011 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebc7443-38b3-490b-bb56-4fade81f4779-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.434163 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92zpk\" (UniqueName: \"kubernetes.io/projected/fbcfbc26-d582-4311-af16-744933a25f5e-kube-api-access-92zpk\") pod \"apiserver-7bbb656c7d-q5z2b\" (UID: \"fbcfbc26-d582-4311-af16-744933a25f5e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.446722 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.454347 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-bound-sa-token\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.467484 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.487776 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.507111 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.516493 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.516727 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.016686079 +0000 UTC m=+138.218948861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.516950 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26a6323f-5055-4036-bc4e-0e1d9239ee73-cert\") pod \"ingress-canary-vzxs5\" (UID: \"26a6323f-5055-4036-bc4e-0e1d9239ee73\") " pod="openshift-ingress-canary/ingress-canary-vzxs5" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517004 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d69891-912a-4536-98d9-55ebf41a3ae1-config\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517109 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef694d4b-2d06-4c04-924e-698540e3692a-proxy-tls\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517242 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8442e2a1-6734-46b7-a163-701037959001-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517348 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9829fed-e403-4cc4-b9be-88ca53421931-metrics-tls\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517379 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d69891-912a-4536-98d9-55ebf41a3ae1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517449 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538a74fd-fc9a-49f8-83cc-c33a83d15081-config-volume\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517478 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517560 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d82a6ce2-925e-45e8-8286-932ecb869755-config\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517613 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3529a429-628d-4c73-aaad-ee3719ea2022-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qhcrv\" (UID: \"3529a429-628d-4c73-aaad-ee3719ea2022\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517648 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9829fed-e403-4cc4-b9be-88ca53421931-config-volume\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517672 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d82a6ce2-925e-45e8-8286-932ecb869755-serving-cert\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.517937 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d69891-912a-4536-98d9-55ebf41a3ae1-config\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.518365 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.018353842 +0000 UTC m=+138.220616624 (durationBeforeRetry 500ms). 
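[Editor's note] The repeated "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" failures above mean the kubelet's CSI plugin registry has no entry for that driver yet: the hostpath provisioner pod (csi-hostpathplugin-vglvd, whose sandbox is only created later in this log) has not yet registered over the kubelet plugin-registration socket, so every MountDevice/TearDown attempt against the PVC fails until it does. One way to observe registration from the API side is the node's CSINode object, which the kubelet updates as drivers register. A minimal client-go sketch, assuming the CRC node name "crc" and an illustrative kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The kubelet records each registered CSI plugin in the node's CSINode
        // object; an empty Drivers list matches the errors seen in this log.
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            fmt.Println("registered CSI driver:", d.Name)
        }
    }

Once the hostpath plugin pod starts and registers, the driver name appears in this list and the pending mount for image-registry-697d97f7c8-dst2r can proceed.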
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.519183 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d82a6ce2-925e-45e8-8286-932ecb869755-config\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.519827 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538a74fd-fc9a-49f8-83cc-c33a83d15081-config-volume\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.521583 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26a6323f-5055-4036-bc4e-0e1d9239ee73-cert\") pod \"ingress-canary-vzxs5\" (UID: \"26a6323f-5055-4036-bc4e-0e1d9239ee73\") " pod="openshift-ingress-canary/ingress-canary-vzxs5" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.522862 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3529a429-628d-4c73-aaad-ee3719ea2022-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qhcrv\" (UID: \"3529a429-628d-4c73-aaad-ee3719ea2022\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.523170 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8442e2a1-6734-46b7-a163-701037959001-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.524675 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d82a6ce2-925e-45e8-8286-932ecb869755-serving-cert\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.524686 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d69891-912a-4536-98d9-55ebf41a3ae1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.527744 4922 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.529724 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef694d4b-2d06-4c04-924e-698540e3692a-proxy-tls\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.533622 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9829fed-e403-4cc4-b9be-88ca53421931-metrics-tls\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.547123 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.549962 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9829fed-e403-4cc4-b9be-88ca53421931-config-volume\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.566313 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.587000 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.607292 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.619221 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.619556 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.119505949 +0000 UTC m=+138.321768751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.620259 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-certs\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.620712 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-node-bootstrap-token\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.620819 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.621564 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.121532112 +0000 UTC m=+138.323794914 (durationBeforeRetry 500ms). 
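[Editor's note] Each failed mount or unmount above is re-queued by the volume operation executor with exponential backoff; "durationBeforeRetry 500ms" is the first step, and in the upstream kubelet versions I know the delay roughly doubles per failure up to a cap of about 2m2s (treat the exact constants as an assumption, not a quote of this kubelet's source). A sketch of the resulting retry schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed constants mirroring kubelet's exponential backoff for volume
        // operations: 500ms initial delay, doubling per failure, capped ~2m2s.
        delay := 500 * time.Millisecond
        const maxDelay = 2*time.Minute + 2*time.Second
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("attempt %d: wait %v before retry\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

This is why the same pvc-657094db-... error recurs at roughly half-second intervals here and then spaces out if the driver stays unregistered.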
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.624583 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-certs\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.628407 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.636212 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7d85b076-254e-4b43-aa7b-9dd8bca7a879-node-bootstrap-token\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.641794 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.664505 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5785\" (UniqueName: \"kubernetes.io/projected/7bab675e-e24a-43aa-abdd-0e657671535d-kube-api-access-q5785\") pod \"controller-manager-879f6c89f-jsdpn\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.684545 4922 request.go:700] Waited for 1.824275365s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.699971 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wmtm\" (UniqueName: \"kubernetes.io/projected/ded42282-8aa9-4480-923f-87fa83ed5e7e-kube-api-access-7wmtm\") pod \"downloads-7954f5f757-rxv7b\" (UID: \"ded42282-8aa9-4480-923f-87fa83ed5e7e\") " pod="openshift-console/downloads-7954f5f757-rxv7b" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.712846 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvznv\" (UniqueName: \"kubernetes.io/projected/2a138ebf-196c-4efa-aa3b-5c563c8209e5-kube-api-access-wvznv\") pod \"etcd-operator-b45778765-k55sj\" (UID: \"2a138ebf-196c-4efa-aa3b-5c563c8209e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.722392 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.722554 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwc7z\" (UniqueName: \"kubernetes.io/projected/317b3c90-5ae1-4952-b54d-92606226bd5c-kube-api-access-rwc7z\") pod \"openshift-apiserver-operator-796bbdcf4f-zkjrq\" (UID: \"317b3c90-5ae1-4952-b54d-92606226bd5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.722584 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.222546676 +0000 UTC m=+138.424809448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.723328 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.723841 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.223822776 +0000 UTC m=+138.426085548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.733264 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.745617 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzs6x\" (UniqueName: \"kubernetes.io/projected/a4f5f606-f217-4189-8d05-f97fe5aeabc2-kube-api-access-hzs6x\") pod \"openshift-config-operator-7777fb866f-r8trh\" (UID: \"a4f5f606-f217-4189-8d05-f97fe5aeabc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.763771 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwqff\" (UniqueName: \"kubernetes.io/projected/232b5ebc-174d-4d6e-b172-854c92006872-kube-api-access-vwqff\") pod \"service-ca-9c57cc56f-5gw2z\" (UID: \"232b5ebc-174d-4d6e-b172-854c92006872\") " pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.774828 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.788563 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6ghs\" (UniqueName: \"kubernetes.io/projected/8fb60f34-dde7-46b1-a726-b1d796ee0d34-kube-api-access-q6ghs\") pod \"machine-config-operator-74547568cd-trcgp\" (UID: \"8fb60f34-dde7-46b1-a726-b1d796ee0d34\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.803714 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6fb8\" (UniqueName: \"kubernetes.io/projected/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-kube-api-access-q6fb8\") pod \"oauth-openshift-558db77b4-5lfvx\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.819318 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.824496 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.824708 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzzpf\" (UniqueName: \"kubernetes.io/projected/fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab-kube-api-access-fzzpf\") pod \"dns-operator-744455d44c-jhchn\" (UID: \"fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.824962 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.324938822 +0000 UTC m=+138.527201604 (durationBeforeRetry 500ms). 
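[Editor's note] The earlier "Waited for 1.824275365s due to client-side throttling, not priority and fairness" line comes from client-go's own token-bucket rate limiter, not from API Priority and Fairness on the server: with dozens of pods mounting volumes at once, the kubelet's token requests queue up behind its client QPS/Burst budget. A sketch of where that budget lives, with illustrative (not recommended) values:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        // client-go throttles on the client side with a token bucket; the
        // "Waited ... due to client-side throttling" log line is emitted when a
        // request sits in this limiter. client-go's defaults are QPS=5, Burst=10;
        // raising them (values here are illustrative) trades those waits for
        // more load on the apiserver.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }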
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.847941 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqjvt\" (UniqueName: \"kubernetes.io/projected/154530fd-6283-436e-8fa7-c7b891904e4d-kube-api-access-fqjvt\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.864957 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmlwm\" (UniqueName: \"kubernetes.io/projected/593cb82e-ba04-47e6-b240-308c22a14457-kube-api-access-kmlwm\") pod \"console-operator-58897d9998-75z9j\" (UID: \"593cb82e-ba04-47e6-b240-308c22a14457\") " pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.883049 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxldq\" (UniqueName: \"kubernetes.io/projected/cc1ae0ea-ace8-40a5-bc24-fc1363e898c1-kube-api-access-rxldq\") pod \"migrator-59844c95c7-jprp4\" (UID: \"cc1ae0ea-ace8-40a5-bc24-fc1363e898c1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.898628 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.904114 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rxv7b" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.905345 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2lmx\" (UniqueName: \"kubernetes.io/projected/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-kube-api-access-t2lmx\") pod \"route-controller-manager-6576b87f9c-ft6sb\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.922459 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.927049 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:00 crc kubenswrapper[4922]: E0126 14:12:00.927497 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 14:12:01.427480934 +0000 UTC m=+138.629743706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.930849 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmms9\" (UniqueName: \"kubernetes.io/projected/046e5172-4319-4334-b411-b7308f761ed0-kube-api-access-vmms9\") pod \"catalog-operator-68c6474976-zspft\" (UID: \"046e5172-4319-4334-b411-b7308f761ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.941689 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.947733 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k55sj"] Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.948537 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s95n9\" (UniqueName: \"kubernetes.io/projected/661b43ba-1b61-4984-af89-7e204e7074a8-kube-api-access-s95n9\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xkr4\" (UID: \"661b43ba-1b61-4984-af89-7e204e7074a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.953592 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.967559 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.971681 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncq9h\" (UniqueName: \"kubernetes.io/projected/44941c5b-19db-4444-81e0-5aa978534263-kube-api-access-ncq9h\") pod \"kube-storage-version-migrator-operator-b67b599dd-zdkj9\" (UID: \"44941c5b-19db-4444-81e0-5aa978534263\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:12:00 crc kubenswrapper[4922]: I0126 14:12:00.990302 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz96q\" (UniqueName: \"kubernetes.io/projected/789d53a9-3224-4166-83b8-b242de99a397-kube-api-access-kz96q\") pod \"olm-operator-6b444d44fb-pjjdt\" (UID: \"789d53a9-3224-4166-83b8-b242de99a397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.003804 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r8trh"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.008276 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p62r\" (UniqueName: \"kubernetes.io/projected/ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf-kube-api-access-9p62r\") pod \"apiserver-76f77b778f-9tg4w\" (UID: \"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf\") " pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.028651 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.028792 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvtbh\" (UniqueName: \"kubernetes.io/projected/3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f-kube-api-access-rvtbh\") pod \"machine-api-operator-5694c8668f-s5rq6\" (UID: \"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.029087 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.529050855 +0000 UTC m=+138.731313627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.045586 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.047729 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv28t\" (UniqueName: \"kubernetes.io/projected/12e31154-e0cc-4aa6-802b-31590a683866-kube-api-access-bv28t\") pod \"marketplace-operator-79b997595-26rnv\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") " pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.054426 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.065707 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5lfvx"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.071642 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.074135 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.077172 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9w4\" (UniqueName: \"kubernetes.io/projected/68e6d11a-9d45-42a9-a366-ee3485704024-kube-api-access-ts9w4\") pod \"router-default-5444994796-p2r7g\" (UID: \"68e6d11a-9d45-42a9-a366-ee3485704024\") " pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:12:01 crc kubenswrapper[4922]: W0126 14:12:01.081894 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4f5f606_f217_4189_8d05_f97fe5aeabc2.slice/crio-6e822becf45c8ffd4089594526f2417a0777d93535bc38e9ac1a6ff1a9d2277c WatchSource:0}: Error finding container 6e822becf45c8ffd4089594526f2417a0777d93535bc38e9ac1a6ff1a9d2277c: Status 404 returned error can't find the container with id 6e822becf45c8ffd4089594526f2417a0777d93535bc38e9ac1a6ff1a9d2277c Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.088414 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcscq\" (UniqueName: \"kubernetes.io/projected/a24e8826-6ea2-4a96-84a8-5ae5931b5a9d-kube-api-access-kcscq\") pod \"machine-approver-56656f9798-5mdjx\" (UID: \"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.088473 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.103375 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-kube-api-access-cnqcq\") pod \"console-f9d7485db-fd75n\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.104620 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.121310 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.125401 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ebc7443-38b3-490b-bb56-4fade81f4779-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d8tp2\" (UID: \"5ebc7443-38b3-490b-bb56-4fade81f4779\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.129724 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.130182 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.130760 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.630745208 +0000 UTC m=+138.833007970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.138248 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5gw2z"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.144736 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb8wq\" (UniqueName: \"kubernetes.io/projected/6761c679-5216-4857-adda-6f032dffb3ad-kube-api-access-mb8wq\") pod \"authentication-operator-69f744f599-zkl86\" (UID: \"6761c679-5216-4857-adda-6f032dffb3ad\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.163451 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcx85\" (UniqueName: \"kubernetes.io/projected/7398d291-531c-4828-b93d-81e238b3b6db-kube-api-access-dcx85\") pod \"package-server-manager-789f6589d5-vq57m\" (UID: \"7398d291-531c-4828-b93d-81e238b3b6db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.178712 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:12:01 crc kubenswrapper[4922]: W0126 14:12:01.184732 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod232b5ebc_174d_4d6e_b172_854c92006872.slice/crio-006a9d7ed0f4f87689000c0c93ab75ea2846d2907749e34ec3d4ee79270c8a97 WatchSource:0}: Error finding container 006a9d7ed0f4f87689000c0c93ab75ea2846d2907749e34ec3d4ee79270c8a97: Status 404 returned error can't find the container with id 006a9d7ed0f4f87689000c0c93ab75ea2846d2907749e34ec3d4ee79270c8a97 Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.189103 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/154530fd-6283-436e-8fa7-c7b891904e4d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-29lxg\" (UID: \"154530fd-6283-436e-8fa7-c7b891904e4d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.207871 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/978b98ab-c044-4683-aabd-5d2b4d3cab70-kube-api-access-ffjll\") pod \"packageserver-d55dfcdfc-jfrvv\" (UID: \"978b98ab-c044-4683-aabd-5d2b4d3cab70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.219358 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.227613 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcsnt\" (UniqueName: \"kubernetes.io/projected/3c992b2a-afc8-4b71-ae58-bb6748930ac9-kube-api-access-pcsnt\") pod \"multus-admission-controller-857f4d67dd-kzfdk\" (UID: \"3c992b2a-afc8-4b71-ae58-bb6748930ac9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.230270 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.230881 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.231409 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.73138897 +0000 UTC m=+138.933651742 (durationBeforeRetry 500ms). 
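[Editor's note] The W-level "Failed to process watch event ... Status 404 returned error can't find the container with id ..." lines are, as far as I can tell, cAdvisor's cgroup watcher racing container creation during this burst of pod startups: the crio-... cgroup appears before the runtime can resolve the container ID, so the lookup 404s and is typically transient. A generic pattern for tolerating that race (all names here are hypothetical, not cAdvisor's API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ErrNotFound stands in for the runtime's 404 "can't find the container" error.
    var ErrNotFound = errors.New("container not found")

    // lookupContainer is a hypothetical stand-in for the runtime query a watcher
    // performs when a new cgroup directory appears.
    func lookupContainer(id string) error {
        return ErrNotFound // pretend the container is not visible yet
    }

    // withRetry treats "not found" as transient: the container usually becomes
    // visible once the runtime finishes creating it.
    func withRetry(id string, attempts int, wait time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = lookupContainer(id); !errors.Is(err, ErrNotFound) {
                return err
            }
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        fmt.Println(withRetry("example-container-id", 3, 100*time.Millisecond))
    }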
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.246386 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.247827 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b9p7\" (UniqueName: \"kubernetes.io/projected/5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60-kube-api-access-5b9p7\") pod \"cluster-samples-operator-665b6dd947-fdzqj\" (UID: \"5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.263551 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljcp4\" (UniqueName: \"kubernetes.io/projected/3529a429-628d-4c73-aaad-ee3719ea2022-kube-api-access-ljcp4\") pod \"control-plane-machine-set-operator-78cbb6b69f-qhcrv\" (UID: \"3529a429-628d-4c73-aaad-ee3719ea2022\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.270932 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rxv7b"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.275855 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.300314 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2l9b\" (UniqueName: \"kubernetes.io/projected/d82a6ce2-925e-45e8-8286-932ecb869755-kube-api-access-p2l9b\") pod \"service-ca-operator-777779d784-z7f85\" (UID: \"d82a6ce2-925e-45e8-8286-932ecb869755\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.301473 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.314629 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.316000 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gg96\" (UniqueName: \"kubernetes.io/projected/26a6323f-5055-4036-bc4e-0e1d9239ee73-kube-api-access-2gg96\") pod \"ingress-canary-vzxs5\" (UID: \"26a6323f-5055-4036-bc4e-0e1d9239ee73\") " pod="openshift-ingress-canary/ingress-canary-vzxs5" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.326651 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.332714 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d69891-912a-4536-98d9-55ebf41a3ae1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vlbbw\" (UID: \"17d69891-912a-4536-98d9-55ebf41a3ae1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.333261 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.333730 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:01.833715634 +0000 UTC m=+139.035978406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.353634 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.360919 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6898\" (UniqueName: \"kubernetes.io/projected/8442e2a1-6734-46b7-a163-701037959001-kube-api-access-z6898\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.362086 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.374453 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzd67\" (UniqueName: \"kubernetes.io/projected/538a74fd-fc9a-49f8-83cc-c33a83d15081-kube-api-access-dzd67\") pod \"collect-profiles-29490600-74vcv\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.387986 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0cfa152-ddce-4531-9d48-07f06824d74b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2pbkx\" (UID: \"b0cfa152-ddce-4531-9d48-07f06824d74b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.397462 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.412026 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.415811 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbz9\" (UniqueName: \"kubernetes.io/projected/d9829fed-e403-4cc4-b9be-88ca53421931-kube-api-access-2pbz9\") pod \"dns-default-qrlv6\" (UID: \"d9829fed-e403-4cc4-b9be-88ca53421931\") " pod="openshift-dns/dns-default-qrlv6" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.428120 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.428944 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xgkf\" (UniqueName: \"kubernetes.io/projected/93331b07-0680-4f0a-b2d6-e629aa6b207b-kube-api-access-4xgkf\") pod \"csi-hostpathplugin-vglvd\" (UID: \"93331b07-0680-4f0a-b2d6-e629aa6b207b\") " pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.432001 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jsdpn"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.433101 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.435712 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.436263 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:12:01.936228684 +0000 UTC m=+139.138491456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.441328 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.445802 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.451846 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.460110 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.461704 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-75z9j"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.464640 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8442e2a1-6734-46b7-a163-701037959001-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r68vc\" (UID: \"8442e2a1-6734-46b7-a163-701037959001\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.465566 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sttnp\" (UniqueName: \"kubernetes.io/projected/ef694d4b-2d06-4c04-924e-698540e3692a-kube-api-access-sttnp\") pod \"machine-config-controller-84d6567774-l6dpv\" (UID: \"ef694d4b-2d06-4c04-924e-698540e3692a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.469363 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.476274 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.481335 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmszr\" (UniqueName: \"kubernetes.io/projected/7d85b076-254e-4b43-aa7b-9dd8bca7a879-kube-api-access-tmszr\") pod \"machine-config-server-kchd7\" (UID: \"7d85b076-254e-4b43-aa7b-9dd8bca7a879\") " pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.491747 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.493016 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.506511 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vglvd" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.514296 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.515761 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vzxs5" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.525666 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qrlv6" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.535816 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kchd7" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.537187 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.537789 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.037766524 +0000 UTC m=+139.240029296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.638191 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.638421 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.138390405 +0000 UTC m=+139.340653177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.638561 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.638962 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.138954083 +0000 UTC m=+139.341216855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: W0126 14:12:01.723928 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda24e8826_6ea2_4a96_84a8_5ae5931b5a9d.slice/crio-8d2f5e765795503dc66922c8a02853d49fbf71dc79dcfc30f534ff3e4524bf34 WatchSource:0}: Error finding container 8d2f5e765795503dc66922c8a02853d49fbf71dc79dcfc30f534ff3e4524bf34: Status 404 returned error can't find the container with id 8d2f5e765795503dc66922c8a02853d49fbf71dc79dcfc30f534ff3e4524bf34 Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.732615 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.739720 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.740323 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.240008007 +0000 UTC m=+139.442270769 (durationBeforeRetry 500ms). 
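[Editor's note] The alternating reconciler_common.go:159 (UnmountVolume started) and reconciler_common.go:218 (MountVolume started) lines throughout this section are the volume manager's reconciler diffing desired state (the new image-registry-697d97f7c8-dst2r pod should have the PVC mounted) against actual state (terminated pod 8f668bae-... still holds the mount); each pass re-queues both operations, and both keep failing until the CSI driver registers. A toy model of that diff, with made-up state keys:

    package main

    import "fmt"

    // reconcile models one pass of the volume manager's reconciler: volumes in
    // actual state but not desired are unmounted; volumes desired but not yet
    // actual are mounted.
    func reconcile(desired, actual map[string]bool) {
        for v := range actual {
            if !desired[v] {
                fmt.Println("UnmountVolume started for", v) // cf. reconciler_common.go:159
            }
        }
        for v := range desired {
            if !actual[v] {
                fmt.Println("MountVolume started for", v) // cf. reconciler_common.go:218
            }
        }
    }

    func main() {
        desired := map[string]bool{"pvc-657094db for pod image-registry-697d97f7c8-dst2r": true}
        actual := map[string]bool{"pvc-657094db for pod 8f668bae-612b-4b75-9490-919e737c6a3b": true}
        reconcile(desired, actual)
    }

Both queued operations then hit the same unregistered-driver error, which is why the Unmount/Mount failure pair repeats every backoff interval.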
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.741825 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4"] Jan 26 14:12:01 crc kubenswrapper[4922]: W0126 14:12:01.741909 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44941c5b_19db_4444_81e0_5aa978534263.slice/crio-4c082d590605261cafeff7e12a5fd0188b1aff90785a957bb763612ba5e78946 WatchSource:0}: Error finding container 4c082d590605261cafeff7e12a5fd0188b1aff90785a957bb763612ba5e78946: Status 404 returned error can't find the container with id 4c082d590605261cafeff7e12a5fd0188b1aff90785a957bb763612ba5e78946 Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.798271 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb"] Jan 26 14:12:01 crc kubenswrapper[4922]: W0126 14:12:01.835848 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod789d53a9_3224_4166_83b8_b242de99a397.slice/crio-0329a1dda86c7eb9ef14885d93190eee51617aa9ecbeffbaaba52464e359c2a0 WatchSource:0}: Error finding container 0329a1dda86c7eb9ef14885d93190eee51617aa9ecbeffbaaba52464e359c2a0: Status 404 returned error can't find the container with id 0329a1dda86c7eb9ef14885d93190eee51617aa9ecbeffbaaba52464e359c2a0 Jan 26 14:12:01 crc kubenswrapper[4922]: W0126 14:12:01.838098 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc1ae0ea_ace8_40a5_bc24_fc1363e898c1.slice/crio-dc94c90c876ad95e44a13a9e607408c90b0339d88f7427489b357fe11ee7323e WatchSource:0}: Error finding container dc94c90c876ad95e44a13a9e607408c90b0339d88f7427489b357fe11ee7323e: Status 404 returned error can't find the container with id dc94c90c876ad95e44a13a9e607408c90b0339d88f7427489b357fe11ee7323e Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.841308 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.841738 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.341719312 +0000 UTC m=+139.543982084 (durationBeforeRetry 500ms). 
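The mount and unmount failures repeating through this stretch of the log all reduce to one condition: the kubelet cannot construct a CSI client because no node plugin named kubevirt.io.hostpath-provisioner has registered with it yet — the provisioner's own pod, hostpath-provisioner/csi-hostpathplugin-vglvd, is itself still waiting for a sandbox per the "No sandbox for pod can be found" entries above. A minimal Go sketch of that lookup-and-fail shape, with invented names (driverRegistry, newCsiDriverClient) standing in for the real kubelet internals:

// Illustrative sketch only: a stand-in for the kubelet-side CSI plugin
// registry whose failed lookup produces the "not found in the list of
// registered CSI drivers" errors seen in this log.
package main

import (
	"fmt"
	"sync"
)

// driverRegistry mimics the registry that CSI node plugins populate by
// dialing the kubelet's plugin-registration socket after they start.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> node plugin endpoint
}

func (r *driverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

// newCsiDriverClient fails the same way the log lines do when the named
// driver has not registered yet.
func (r *driverRegistry) newCsiDriverClient(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}}

	// Before the csi-hostpathplugin pod is up: every attempt fails fast.
	if _, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("mount attempt:", err)
	}

	// Once the node plugin registers, the same lookup succeeds.
	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	if ep, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("mount attempt: using endpoint", ep)
	}
}

Once the plugin pod starts and registers, the same lookup succeeds and the parked mount and unmount operations drain on their next retry.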
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.870255 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s5rq6"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.872411 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.873290 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jhchn"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.882912 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-26rnv"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.921880 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" event={"ID":"a4f5f606-f217-4189-8d05-f97fe5aeabc2","Type":"ContainerStarted","Data":"4e9dc1c7fefc64ce317721bc3278d41e7dfa23cfafa170388e45c941dec3a806"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.921939 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" event={"ID":"a4f5f606-f217-4189-8d05-f97fe5aeabc2","Type":"ContainerStarted","Data":"6e822becf45c8ffd4089594526f2417a0777d93535bc38e9ac1a6ff1a9d2277c"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.926676 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9tg4w"] Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.928332 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4" event={"ID":"cc1ae0ea-ace8-40a5-bc24-fc1363e898c1","Type":"ContainerStarted","Data":"dc94c90c876ad95e44a13a9e607408c90b0339d88f7427489b357fe11ee7323e"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.940557 4922 generic.go:334] "Generic (PLEG): container finished" podID="fbcfbc26-d582-4311-af16-744933a25f5e" containerID="6f366c5cf43b20ded1972ead53c4aaa4389808e94bbc4ebfe666e44b0cfab00c" exitCode=0 Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.940720 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" event={"ID":"fbcfbc26-d582-4311-af16-744933a25f5e","Type":"ContainerDied","Data":"6f366c5cf43b20ded1972ead53c4aaa4389808e94bbc4ebfe666e44b0cfab00c"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.940752 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" event={"ID":"fbcfbc26-d582-4311-af16-744933a25f5e","Type":"ContainerStarted","Data":"cd8bb7a3f29f46a0999ed3cf05c45aab96c6a3e408e958b3737c6e8672c31cda"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.942125 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.942420 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.442383024 +0000 UTC m=+139.644645936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.942513 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:01 crc kubenswrapper[4922]: E0126 14:12:01.942938 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.442930281 +0000 UTC m=+139.645193053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.944889 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" event={"ID":"7bab675e-e24a-43aa-abdd-0e657671535d","Type":"ContainerStarted","Data":"80eb6e9955cdad4c49fa12d7931b2eead9c552f6de198bdbbe2cc52203d77687"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.946528 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" event={"ID":"317b3c90-5ae1-4952-b54d-92606226bd5c","Type":"ContainerStarted","Data":"d8f003f841b21e6d84221889e244f16a1a8a428954634cb072a9745aa15e3342"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.948381 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" event={"ID":"8fb60f34-dde7-46b1-a726-b1d796ee0d34","Type":"ContainerStarted","Data":"fb9d09ea337f1ffb20c6083502cf9496a9d25874f3982320fb7438787ab86f0f"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.951783 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" event={"ID":"232b5ebc-174d-4d6e-b172-854c92006872","Type":"ContainerStarted","Data":"5de6f57d7bb4de66cba075fa227d831a3a7a1d3ca6c763c29e372702177fa91c"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.951853 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" event={"ID":"232b5ebc-174d-4d6e-b172-854c92006872","Type":"ContainerStarted","Data":"006a9d7ed0f4f87689000c0c93ab75ea2846d2907749e34ec3d4ee79270c8a97"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.964844 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rxv7b" event={"ID":"ded42282-8aa9-4480-923f-87fa83ed5e7e","Type":"ContainerStarted","Data":"77c73e0caf5a03fdf5fbd295af9e97a42bd09a9dffb95ceed5f684fe14e7e7cf"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.968711 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" event={"ID":"b7ff7450-4cc5-40df-a820-7cec4a3a9b95","Type":"ContainerStarted","Data":"fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.968778 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" event={"ID":"b7ff7450-4cc5-40df-a820-7cec4a3a9b95","Type":"ContainerStarted","Data":"e80546f5cc430fd1f1eb6fa156d4ba3412e71104c6a18efdfe4004a47f1032d7"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.969875 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:12:01 crc kubenswrapper[4922]: W0126 14:12:01.971192 4922 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43e3ab3b_9e40_4388_a3d6_a3774fdc6bf3.slice/crio-57bf27040dc5d0c1e76f4fad3fa7c91b2872f0edad397962c90ec3fa2b7de4bd WatchSource:0}: Error finding container 57bf27040dc5d0c1e76f4fad3fa7c91b2872f0edad397962c90ec3fa2b7de4bd: Status 404 returned error can't find the container with id 57bf27040dc5d0c1e76f4fad3fa7c91b2872f0edad397962c90ec3fa2b7de4bd Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.974222 4922 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5lfvx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.42:6443/healthz\": dial tcp 10.217.0.42:6443: connect: connection refused" start-of-body= Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.974350 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" podUID="b7ff7450-4cc5-40df-a820-7cec4a3a9b95" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.42:6443/healthz\": dial tcp 10.217.0.42:6443: connect: connection refused" Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.975266 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-75z9j" event={"ID":"593cb82e-ba04-47e6-b240-308c22a14457","Type":"ContainerStarted","Data":"bdad27f2b966a307ea240e1cdfe8fd64e16ae59900e465489a0b0fc515d8320a"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.976488 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" event={"ID":"44941c5b-19db-4444-81e0-5aa978534263","Type":"ContainerStarted","Data":"4c082d590605261cafeff7e12a5fd0188b1aff90785a957bb763612ba5e78946"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.977989 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" event={"ID":"2a138ebf-196c-4efa-aa3b-5c563c8209e5","Type":"ContainerStarted","Data":"10ab3aa40d39e27ef5deede59046a4159ea4fe9d62106cbeab3e48caf6ef6ece"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.978012 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" event={"ID":"2a138ebf-196c-4efa-aa3b-5c563c8209e5","Type":"ContainerStarted","Data":"2017dcbdf907d5d56188d46c4436ef681febce255af5286b970bdd58e250b104"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.981641 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" event={"ID":"789d53a9-3224-4166-83b8-b242de99a397","Type":"ContainerStarted","Data":"0329a1dda86c7eb9ef14885d93190eee51617aa9ecbeffbaaba52464e359c2a0"} Jan 26 14:12:01 crc kubenswrapper[4922]: I0126 14:12:01.988934 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" event={"ID":"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d","Type":"ContainerStarted","Data":"8d2f5e765795503dc66922c8a02853d49fbf71dc79dcfc30f534ff3e4524bf34"} Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.004355 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft"] Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.024726 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m"] Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.044045 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.044403 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.544378738 +0000 UTC m=+139.746641510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.045754 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.049824 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.549805608 +0000 UTC m=+139.752068380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.155029 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.155221 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.655182789 +0000 UTC m=+139.857445561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.155799 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.156135 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.656126728 +0000 UTC m=+139.858389490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: W0126 14:12:02.234095 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7398d291_531c_4828_b93d_81e238b3b6db.slice/crio-131efe075b4d0f16d4b9995d037f6520bb54a385885756dea7f3deeb6bef706f WatchSource:0}: Error finding container 131efe075b4d0f16d4b9995d037f6520bb54a385885756dea7f3deeb6bef706f: Status 404 returned error can't find the container with id 131efe075b4d0f16d4b9995d037f6520bb54a385885756dea7f3deeb6bef706f Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.256846 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.257273 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.757249305 +0000 UTC m=+139.959512077 (durationBeforeRetry 500ms). 
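The manager.go:1169 warnings interleaved here record a benign ordering race rather than a pod failure: the container's cgroup directory shows up in cadvisor's filesystem watch before the runtime has registered the container, so the lookup by id returns the 404-shaped error and the event is dropped until a later pass. A sketch of that ordering, with invented names (watchEvent, containerIndex) — an assumption about the mechanism, not the actual cadvisor code:

// Sketch of the race behind the "Failed to process watch event ... 404"
// warnings: the cgroup appears before the container is registered.
package main

import "fmt"

type watchEvent struct {
	containerID string
}

type containerIndex map[string]bool // ids the runtime has registered so far

func processWatchEvent(idx containerIndex, ev watchEvent) error {
	if !idx[ev.containerID] {
		// Same shape as the logged warning; tolerated because the container
		// usually registers moments later and a subsequent pass succeeds.
		return fmt.Errorf("error finding container %s: Status 404 returned error can't find the container with id %s",
			ev.containerID, ev.containerID)
	}
	return nil
}

func main() {
	idx := containerIndex{}
	ev := watchEvent{containerID: "8d2f5e765795503dc66922c8a02853d49fbf71dc79dcfc30f534ff3e4524bf34"}

	// cgroup creation event arrives first: the lookup fails.
	if err := processWatchEvent(idx, ev); err != nil {
		fmt.Println("W", err)
	}

	// runtime registration lands a moment later: the same event now succeeds.
	idx[ev.containerID] = true
	fmt.Println("retry ok:", processWatchEvent(idx, ev) == nil)
}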
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.257722 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fd75n"] Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.359693 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.360140 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.860122406 +0000 UTC m=+140.062385178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.364724 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv"] Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.405256 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj"] Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.423375 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg"] Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.462513 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.463004 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:02.962953597 +0000 UTC m=+140.165216369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.563994 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.564608 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.064592769 +0000 UTC m=+140.266855541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.664952 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.665452 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.165412556 +0000 UTC m=+140.367675328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.665643 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.666005 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.165991824 +0000 UTC m=+140.368254596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.766253 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.766674 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.266648836 +0000 UTC m=+140.468911618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.867396 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.867884 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.367866176 +0000 UTC m=+140.570128948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.870461 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qrlv6"] Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.968298 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:02 crc kubenswrapper[4922]: E0126 14:12:02.968707 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.468685753 +0000 UTC m=+140.670948525 (durationBeforeRetry 500ms). 
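Note the cadence of the volume errors: each failure is parked by nestedpendingoperations with a not-before deadline exactly 500ms out ("No retries permitted until ... (durationBeforeRetry 500ms)"), while the reconciler re-queues the volume roughly every 100ms; only passes that reach the deadline actually re-run MountDevice or TearDown, so each operation re-runs at most once per 500ms even though the reconciler loop is much hotter. A rough sketch of that gating, under assumed names (pendingOp, try):

// Sketch of the not-before gating visible in the "No retries permitted
// until ..." lines: failures stamp a deadline; earlier passes are skipped.
package main

import (
	"errors"
	"fmt"
	"time"
)

type pendingOp struct {
	notBefore time.Time
	backoff   time.Duration // the 500ms durationBeforeRetry in the log
}

// try skips the operation before its deadline; a failure re-arms the deadline.
func (p *pendingOp) try(now time.Time, run func() error) (ran bool, err error) {
	if now.Before(p.notBefore) {
		return false, nil // retry not permitted yet
	}
	if err := run(); err != nil {
		p.notBefore = now.Add(p.backoff)
		return true, err
	}
	return true, nil
}

func main() {
	op := pendingOp{backoff: 500 * time.Millisecond}
	mountDevice := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}

	// The reconciler wakes about every 100ms; the failure only re-fires on
	// the passes at t=0, 500, 1000ms, matching the ~500ms cadence above.
	start := time.Now()
	for tick := 0; tick < 12; tick++ {
		now := start.Add(time.Duration(tick*100) * time.Millisecond)
		if ran, err := op.try(now, mountDevice); ran && err != nil {
			fmt.Printf("t=%4dms E: %v\n", tick*100, err)
		}
	}
}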
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.994873 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" event={"ID":"fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab","Type":"ContainerStarted","Data":"51b5a144061a5fae7fd5b9aa36764651c4b8d322f86700b0b440c686f671cf9f"} Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.995870 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" event={"ID":"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf","Type":"ContainerStarted","Data":"367104a04e8416fe27976d94f4458093f61871f058289f143e612ed64de7f3d0"} Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.996584 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" event={"ID":"046e5172-4319-4334-b411-b7308f761ed0","Type":"ContainerStarted","Data":"48d69f6c35111e8db08652d0b8acca16e3324caeab6b6159ada2fd18a578eaf1"} Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.997252 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" event={"ID":"661b43ba-1b61-4984-af89-7e204e7074a8","Type":"ContainerStarted","Data":"31c4e4d7facc7bfa849f1bb2f253854a4719c177464332bfff58aa775020a228"} Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.998435 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" event={"ID":"12e31154-e0cc-4aa6-802b-31590a683866","Type":"ContainerStarted","Data":"ad0f4cf1b7540d0ffd2f7b3e6f29ba02f2ff9cebcdbdd6473d67363d4668f8fd"} Jan 26 14:12:02 crc kubenswrapper[4922]: I0126 14:12:02.999409 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-p2r7g" event={"ID":"68e6d11a-9d45-42a9-a366-ee3485704024","Type":"ContainerStarted","Data":"507b387f47cb38676f9cc35b5c08bed7fa0b2a6afdc7ffb012e9dd3230b46824"} Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.000301 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" event={"ID":"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3","Type":"ContainerStarted","Data":"57bf27040dc5d0c1e76f4fad3fa7c91b2872f0edad397962c90ec3fa2b7de4bd"} Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.003290 4922 generic.go:334] "Generic (PLEG): container finished" podID="a4f5f606-f217-4189-8d05-f97fe5aeabc2" containerID="4e9dc1c7fefc64ce317721bc3278d41e7dfa23cfafa170388e45c941dec3a806" exitCode=0 Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.003652 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" event={"ID":"a4f5f606-f217-4189-8d05-f97fe5aeabc2","Type":"ContainerDied","Data":"4e9dc1c7fefc64ce317721bc3278d41e7dfa23cfafa170388e45c941dec3a806"} Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.005454 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" event={"ID":"7398d291-531c-4828-b93d-81e238b3b6db","Type":"ContainerStarted","Data":"131efe075b4d0f16d4b9995d037f6520bb54a385885756dea7f3deeb6bef706f"} Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.006882 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" event={"ID":"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f","Type":"ContainerStarted","Data":"4f60ceb301ca47118ec9c8cc37535660394fdfa6a68ff299b10b2a97224ce664"} Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.008272 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kchd7" event={"ID":"7d85b076-254e-4b43-aa7b-9dd8bca7a879","Type":"ContainerStarted","Data":"61265514f597596663e7fb249321c913665cfc228b5468c10abb1b7bb295f452"} Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.032331 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv"] Jan 26 14:12:03 crc kubenswrapper[4922]: W0126 14:12:03.050951 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69e8d6c7_f2f5_465e_93ee_e4eead1f58c4.slice/crio-4cd16abca1a4a6cb4698b3068a4d9b08b896cf9adcfc96fa80544ddb46da015c WatchSource:0}: Error finding container 4cd16abca1a4a6cb4698b3068a4d9b08b896cf9adcfc96fa80544ddb46da015c: Status 404 returned error can't find the container with id 4cd16abca1a4a6cb4698b3068a4d9b08b896cf9adcfc96fa80544ddb46da015c Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.070428 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.074186 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.574166766 +0000 UTC m=+140.776429528 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.085972 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.093950 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kzfdk"] Jan 26 14:12:03 crc kubenswrapper[4922]: W0126 14:12:03.104400 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod154530fd_6283_436e_8fa7_c7b891904e4d.slice/crio-2accb0666b5ab9ac1e336519b98ec06fa4cda268c2c59515f131a84431ba0425 WatchSource:0}: Error finding container 2accb0666b5ab9ac1e336519b98ec06fa4cda268c2c59515f131a84431ba0425: Status 404 returned error can't find the container with id 2accb0666b5ab9ac1e336519b98ec06fa4cda268c2c59515f131a84431ba0425 Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.113482 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.113530 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z7f85"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.120240 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.121620 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vglvd"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.129748 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zkl86"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.146578 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" podStartSLOduration=121.146560951 podStartE2EDuration="2m1.146560951s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:03.145028942 +0000 UTC m=+140.347291714" watchObservedRunningTime="2026-01-26 14:12:03.146560951 +0000 UTC m=+140.348823723" Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.171575 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.171944 4922 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.671920777 +0000 UTC m=+140.874183549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.172321 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vzxs5"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.185571 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.196116 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.204573 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv"] Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.223880 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5gw2z" podStartSLOduration=120.223855028 podStartE2EDuration="2m0.223855028s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:03.218700137 +0000 UTC m=+140.420962909" watchObservedRunningTime="2026-01-26 14:12:03.223855028 +0000 UTC m=+140.426117800" Jan 26 14:12:03 crc kubenswrapper[4922]: W0126 14:12:03.236095 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8442e2a1_6734_46b7_a163_701037959001.slice/crio-8fc2e21f15167af046fcec75216fce8de48a3b73b567ccc57896bf0b39a055e9 WatchSource:0}: Error finding container 8fc2e21f15167af046fcec75216fce8de48a3b73b567ccc57896bf0b39a055e9: Status 404 returned error can't find the container with id 8fc2e21f15167af046fcec75216fce8de48a3b73b567ccc57896bf0b39a055e9 Jan 26 14:12:03 crc kubenswrapper[4922]: W0126 14:12:03.248032 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6761c679_5216_4857_adda_6f032dffb3ad.slice/crio-45a5407c1bf1f58312853374836240a00882518c9c5472287f6adb3cf71b930a WatchSource:0}: Error finding container 45a5407c1bf1f58312853374836240a00882518c9c5472287f6adb3cf71b930a: Status 404 returned error can't find the container with id 45a5407c1bf1f58312853374836240a00882518c9c5472287f6adb3cf71b930a Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.266030 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-k55sj" podStartSLOduration=120.266003672 podStartE2EDuration="2m0.266003672s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:03.263232775 +0000 UTC m=+140.465495567" watchObservedRunningTime="2026-01-26 14:12:03.266003672 +0000 UTC m=+140.468266434" Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.278928 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.279406 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.779383813 +0000 UTC m=+140.981646765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.379503 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.380389 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.880356615 +0000 UTC m=+141.082619387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.381441 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.381772 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.881764539 +0000 UTC m=+141.084027311 (durationBeforeRetry 500ms). 
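The pod_startup_latency_tracker entries above compute podStartSLOduration as the observed running time minus the pod's creation timestamp, less any window spent pulling images — and the zero-valued firstStartedPulling/lastFinishedPulling here mean nothing is subtracted. For oauth-openshift-558db77b4-5lfvx that is 14:12:03.146560951 minus 14:10:02, i.e. the logged 121.146560951s. A small check of that arithmetic, with the field meanings read off the log text rather than from the tracker's source:

// Back-of-the-envelope check of podStartSLOduration for the
// oauth-openshift pod, using the timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-26T14:10:02Z")
	running, _ := time.Parse(time.RFC3339, "2026-01-26T14:12:03.146560951Z")

	var firstPull, lastPull time.Time // zero values, as in the log lines
	slo := running.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		slo -= lastPull.Sub(firstPull) // exclude time spent pulling images
	}
	fmt.Println(slo) // 2m1.146560951s, matching podStartSLOduration=121.146560951s
}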
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.444300 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.483728 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.483966 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.983935118 +0000 UTC m=+141.186197880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.484279 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.484679 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:03.984662092 +0000 UTC m=+141.186924864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.585755 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:12:04.085732447 +0000 UTC m=+141.287995219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.585932 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.586428 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.586776 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.086769089 +0000 UTC m=+141.289031861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: W0126 14:12:03.647564 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0cfa152_ddce_4531_9d48_07f06824d74b.slice/crio-53211c3202c939de74773b293d5963a0c9c2249a0632bef2eead47b2f745113a WatchSource:0}: Error finding container 53211c3202c939de74773b293d5963a0c9c2249a0632bef2eead47b2f745113a: Status 404 returned error can't find the container with id 53211c3202c939de74773b293d5963a0c9c2249a0632bef2eead47b2f745113a Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.688391 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.688688 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:12:04.188652439 +0000 UTC m=+141.390915211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.689242 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.689712 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.189694502 +0000 UTC m=+141.391957274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.791184 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.791694 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.291669295 +0000 UTC m=+141.493932067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.892495 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:03 crc kubenswrapper[4922]: E0126 14:12:03.893239 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.393220435 +0000 UTC m=+141.595483207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:03 crc kubenswrapper[4922]: I0126 14:12:03.998570 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:03.998953 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.498911406 +0000 UTC m=+141.701174188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:03.999755 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.000265 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.500254028 +0000 UTC m=+141.702516810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.068762 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" event={"ID":"ef694d4b-2d06-4c04-924e-698540e3692a","Type":"ContainerStarted","Data":"c3b56d42146ee03fed27bd71a8a083cf47db6246367e59d8fbe31990a62b669f"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.081135 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" event={"ID":"5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60","Type":"ContainerStarted","Data":"8dc6c9603674b88f9d771fd4d5277de961d5c795f0520cf2034b9d9593b4857f"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.101054 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.103651 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.603625975 +0000 UTC m=+141.805888747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.113492 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" event={"ID":"fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab","Type":"ContainerStarted","Data":"4ddc73a50e5e2620ad9e7f27a095ceccb93a981b995ec9623125a363223eacde"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.160547 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" event={"ID":"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f","Type":"ContainerStarted","Data":"5d1781fddac5fd27a73e619583c3955412b7f6384f774a2ea8ad9d276f7897f8"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.187379 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vzxs5" event={"ID":"26a6323f-5055-4036-bc4e-0e1d9239ee73","Type":"ContainerStarted","Data":"ae11be05113149b80772655ae3e7e39919644bfca213d54b1883a21bdd573d51"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.202891 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" event={"ID":"661b43ba-1b61-4984-af89-7e204e7074a8","Type":"ContainerStarted","Data":"3b635a6de0b742a328f432d79e80501d29678ca09b56a77b00d500b6d75c5964"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.204575 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.204999 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.704979029 +0000 UTC m=+141.907241801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.206846 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qrlv6" event={"ID":"d9829fed-e403-4cc4-b9be-88ca53421931","Type":"ContainerStarted","Data":"2a78bdf412cca03a7f5e5a127eea7b465f0bd7ab3f6d07acb682781af9a512b0"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.211938 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" event={"ID":"8442e2a1-6734-46b7-a163-701037959001","Type":"ContainerStarted","Data":"8fc2e21f15167af046fcec75216fce8de48a3b73b567ccc57896bf0b39a055e9"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.228290 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vglvd" event={"ID":"93331b07-0680-4f0a-b2d6-e629aa6b207b","Type":"ContainerStarted","Data":"de04d7b4b1afd4d4e7e2aa7d6345c21f77e3143a01ea7346020246279991c70e"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.232024 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xkr4" podStartSLOduration=121.231987457 podStartE2EDuration="2m1.231987457s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:04.223543952 +0000 UTC m=+141.425806724" watchObservedRunningTime="2026-01-26 14:12:04.231987457 +0000 UTC m=+141.434250229" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.233022 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" event={"ID":"154530fd-6283-436e-8fa7-c7b891904e4d","Type":"ContainerStarted","Data":"2accb0666b5ab9ac1e336519b98ec06fa4cda268c2c59515f131a84431ba0425"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.249585 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" event={"ID":"538a74fd-fc9a-49f8-83cc-c33a83d15081","Type":"ContainerStarted","Data":"6aa64f76217f806bc2877ba325973e8402e9025b291e466e580bf8dc15fb863a"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.270279 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rxv7b" event={"ID":"ded42282-8aa9-4480-923f-87fa83ed5e7e","Type":"ContainerStarted","Data":"5776c81f4428acb4e0e4c882cf408220a28e689da26f5ff57984213b50d7af42"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.271925 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rxv7b" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.276267 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-rxv7b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection 
refused" start-of-body= Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.276333 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rxv7b" podUID="ded42282-8aa9-4480-923f-87fa83ed5e7e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.318315 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" event={"ID":"17d69891-912a-4536-98d9-55ebf41a3ae1","Type":"ContainerStarted","Data":"ee1e44846772addb811a642a06c3d841a8d38ad8a9198cdde699a12682404a09"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.322813 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.323339 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.823310346 +0000 UTC m=+142.025573118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.331766 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" event={"ID":"3c992b2a-afc8-4b71-ae58-bb6748930ac9","Type":"ContainerStarted","Data":"c9ee2138719a401330bbad8f47cb0524d3ac1de22454e71a0f998cf5d4a91733"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.352419 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fd75n" event={"ID":"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4","Type":"ContainerStarted","Data":"4cd16abca1a4a6cb4698b3068a4d9b08b896cf9adcfc96fa80544ddb46da015c"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.360926 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" event={"ID":"8fb60f34-dde7-46b1-a726-b1d796ee0d34","Type":"ContainerStarted","Data":"d81696c169c0d3bf74bf8e8759305054cefab460e087ef596e1c4f4db89859e2"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.387434 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" event={"ID":"d82a6ce2-925e-45e8-8286-932ecb869755","Type":"ContainerStarted","Data":"9f7f71dcfacfba4cea6988b661fb8b7cf321cbfa0ac24b62ed353dc3dacbc60c"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.393998 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" 
event={"ID":"6761c679-5216-4857-adda-6f032dffb3ad","Type":"ContainerStarted","Data":"45a5407c1bf1f58312853374836240a00882518c9c5472287f6adb3cf71b930a"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.432047 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.432655 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:04.932642261 +0000 UTC m=+142.134905033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.433029 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" event={"ID":"b0cfa152-ddce-4531-9d48-07f06824d74b","Type":"ContainerStarted","Data":"53211c3202c939de74773b293d5963a0c9c2249a0632bef2eead47b2f745113a"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.451432 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" event={"ID":"978b98ab-c044-4683-aabd-5d2b4d3cab70","Type":"ContainerStarted","Data":"8805bdf1bf36342df5a4312430d096cc0aae6b9430497b923ff653783617b4ff"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.459633 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" event={"ID":"7bab675e-e24a-43aa-abdd-0e657671535d","Type":"ContainerStarted","Data":"c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.460168 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.465786 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" event={"ID":"5ebc7443-38b3-490b-bb56-4fade81f4779","Type":"ContainerStarted","Data":"18ccc625c97f34df27cb08aecdb9982e0f10f00cff68d217131e18a149261c2a"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.468877 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.470996 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" 
event={"ID":"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d","Type":"ContainerStarted","Data":"a901cd9dcbbe8f8c8e6e632a16b7cd6241f7be8be155a56188db13f4aba5e020"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.476376 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" event={"ID":"789d53a9-3224-4166-83b8-b242de99a397","Type":"ContainerStarted","Data":"54f95123417f640b4f17c74e120cd30d8237a1bb205f3e1fa4d9305c5828c417"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.476992 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.480424 4922 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pjjdt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.480475 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" podUID="789d53a9-3224-4166-83b8-b242de99a397" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.483674 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" event={"ID":"317b3c90-5ae1-4952-b54d-92606226bd5c","Type":"ContainerStarted","Data":"bc6962ffa91bcf04125bdc2eb1d2881f3a1ecbb1853f69ffb8331f35a592974b"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.485972 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-rxv7b" podStartSLOduration=121.485954255 podStartE2EDuration="2m1.485954255s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:04.31801039 +0000 UTC m=+141.520273172" watchObservedRunningTime="2026-01-26 14:12:04.485954255 +0000 UTC m=+141.688217027" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.487014 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" podStartSLOduration=121.487009758 podStartE2EDuration="2m1.487009758s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:04.48610137 +0000 UTC m=+141.688364152" watchObservedRunningTime="2026-01-26 14:12:04.487009758 +0000 UTC m=+141.689272530" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.509284 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4" event={"ID":"cc1ae0ea-ace8-40a5-bc24-fc1363e898c1","Type":"ContainerStarted","Data":"d1eb88b586cc283ad82994db91815f0b7b43b7fe559a65a4a4e43a51f6a7be09"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.533263 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" 
event={"ID":"3529a429-628d-4c73-aaad-ee3719ea2022","Type":"ContainerStarted","Data":"441f33f8109ba5cba1c521b7508b3b4977b4e3c60c4483da365ff877de8a121c"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.534195 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.535687 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.035669327 +0000 UTC m=+142.237932099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.575437 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" podStartSLOduration=121.575417356 podStartE2EDuration="2m1.575417356s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:04.512180359 +0000 UTC m=+141.714443131" watchObservedRunningTime="2026-01-26 14:12:04.575417356 +0000 UTC m=+141.777680128" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.594829 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-75z9j" event={"ID":"593cb82e-ba04-47e6-b240-308c22a14457","Type":"ContainerStarted","Data":"974fec7f2592f63622688e062a27db882bb0ce8733ebdf1ff0f52d298bac9ca4"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.596165 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.637386 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.639041 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.139024824 +0000 UTC m=+142.341287586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.643473 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" event={"ID":"44941c5b-19db-4444-81e0-5aa978534263","Type":"ContainerStarted","Data":"442a3268fc1ef0a6e4ddb97db7c0bcd06b954e65c86fef622eea755713de65f7"} Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.689160 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zkjrq" podStartSLOduration=122.689124217 podStartE2EDuration="2m2.689124217s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:04.588172476 +0000 UTC m=+141.790435248" watchObservedRunningTime="2026-01-26 14:12:04.689124217 +0000 UTC m=+141.891386989" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.743822 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zdkj9" podStartSLOduration=121.743792005 podStartE2EDuration="2m1.743792005s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:04.735577397 +0000 UTC m=+141.937840169" watchObservedRunningTime="2026-01-26 14:12:04.743792005 +0000 UTC m=+141.946054777" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.746273 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.746560 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.246534681 +0000 UTC m=+142.448797453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.747215 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.749329 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.249318728 +0000 UTC m=+142.451581500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.844479 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-75z9j" podStartSLOduration=121.844447137 podStartE2EDuration="2m1.844447137s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:04.82834008 +0000 UTC m=+142.030602852" watchObservedRunningTime="2026-01-26 14:12:04.844447137 +0000 UTC m=+142.046709909" Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.868895 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.869870 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.369840484 +0000 UTC m=+142.572103256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:04 crc kubenswrapper[4922]: I0126 14:12:04.971332 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:04 crc kubenswrapper[4922]: E0126 14:12:04.971675 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.471661742 +0000 UTC m=+142.673924514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.072262 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.072619 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.572549882 +0000 UTC m=+142.774812664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.072909 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.073283 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.573269135 +0000 UTC m=+142.775531907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.173777 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.174173 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.674149984 +0000 UTC m=+142.876412756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.275155 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.275886 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.775873919 +0000 UTC m=+142.978136691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.377271 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.377783 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.877756769 +0000 UTC m=+143.080019541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.479494 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.479976 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:05.97996098 +0000 UTC m=+143.182223752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.536986 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-75z9j" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.581790 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.582178 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:06.08215674 +0000 UTC m=+143.284419512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.684237 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.684702 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:06.184687131 +0000 UTC m=+143.386949903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.718050 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" event={"ID":"046e5172-4319-4334-b411-b7308f761ed0","Type":"ContainerStarted","Data":"fbd1cd2022d5b71fbe1a769ecff1cc0c8ff11a1fe6e5cc6533889ff4208db2ff"} Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.718969 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.740034 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" event={"ID":"fbcfbc26-d582-4311-af16-744933a25f5e","Type":"ContainerStarted","Data":"0cd374e66f104dc9aec4c661bed95c6294d475c283989505f7011c56276fa327"} Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.761594 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" podStartSLOduration=122.761548935 podStartE2EDuration="2m2.761548935s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:05.760757 +0000 UTC m=+142.963019782" watchObservedRunningTime="2026-01-26 14:12:05.761548935 +0000 UTC m=+142.963811707" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.764293 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" 
event={"ID":"7398d291-531c-4828-b93d-81e238b3b6db","Type":"ContainerStarted","Data":"8e40fd38b694127b7f8469b78afde55e440869ef6159d421dc63060d26026633"} Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.785678 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.787116 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:06.287096127 +0000 UTC m=+143.489358899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.788361 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" event={"ID":"3529a429-628d-4c73-aaad-ee3719ea2022","Type":"ContainerStarted","Data":"99a85c99a7ff8440a3a8d7920c73533b465ded949e0bf84af560dcc66730a876"} Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.792654 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zspft" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.846196 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" podStartSLOduration=122.846167063 podStartE2EDuration="2m2.846167063s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:05.811257997 +0000 UTC m=+143.013520769" watchObservedRunningTime="2026-01-26 14:12:05.846167063 +0000 UTC m=+143.048429835" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.848135 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" event={"ID":"8fb60f34-dde7-46b1-a726-b1d796ee0d34","Type":"ContainerStarted","Data":"62eba9ea4c5d2e38a9c537fd26effbec40c5a7a9cfcaa882272058ccea1a908f"} Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.878335 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" event={"ID":"a4f5f606-f217-4189-8d05-f97fe5aeabc2","Type":"ContainerStarted","Data":"620f8f47a17b190e3cd582feb522a5e19b4ad7278f472bba8d8a6c5c9559bac5"} Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.878409 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.888557 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.899495 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:06.399473728 +0000 UTC m=+143.601736500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.914660 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" event={"ID":"12e31154-e0cc-4aa6-802b-31590a683866","Type":"ContainerStarted","Data":"9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52"} Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.915739 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.930043 4922 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-26rnv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.930124 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" podUID="12e31154-e0cc-4aa6-802b-31590a683866" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.934330 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kchd7" event={"ID":"7d85b076-254e-4b43-aa7b-9dd8bca7a879","Type":"ContainerStarted","Data":"8532bf3c89e4ddde4508ef2a85bbd5569340e0bea692e67e9e93fb082e7b9482"} Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.954630 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" event={"ID":"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3","Type":"ContainerStarted","Data":"1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249"} Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.955932 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.970519 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qhcrv" podStartSLOduration=122.970492069 podStartE2EDuration="2m2.970492069s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:05.929506121 +0000 UTC m=+143.131768893" watchObservedRunningTime="2026-01-26 14:12:05.970492069 +0000 UTC m=+143.172754841" Jan 26 14:12:05 crc kubenswrapper[4922]: I0126 14:12:05.990548 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:05 crc kubenswrapper[4922]: E0126 14:12:05.993201 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:06.493164581 +0000 UTC m=+143.695427353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.014975 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vzxs5" event={"ID":"26a6323f-5055-4036-bc4e-0e1d9239ee73","Type":"ContainerStarted","Data":"22cef43aeebc15ddaf71fcc84a79948dbed51397c781d915e77e717daa724176"} Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.028361 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" podStartSLOduration=123.028333056 podStartE2EDuration="2m3.028333056s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:05.969549389 +0000 UTC m=+143.171812161" watchObservedRunningTime="2026-01-26 14:12:06.028333056 +0000 UTC m=+143.230595818" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.030323 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-trcgp" podStartSLOduration=123.030316638 podStartE2EDuration="2m3.030316638s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:06.020933013 +0000 UTC m=+143.223195785" watchObservedRunningTime="2026-01-26 14:12:06.030316638 +0000 UTC m=+143.232579400" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.037647 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" 
event={"ID":"8442e2a1-6734-46b7-a163-701037959001","Type":"ContainerStarted","Data":"defd3b1603420bfbe55d2158c0a1d95522d7f05aacbe1aedb6259a12749c25dc"} Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.095194 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.098779 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:06.598763188 +0000 UTC m=+143.801025960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.100668 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh" podStartSLOduration=123.100650367 podStartE2EDuration="2m3.100650367s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:06.090380825 +0000 UTC m=+143.292643597" watchObservedRunningTime="2026-01-26 14:12:06.100650367 +0000 UTC m=+143.302913139" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.127426 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" event={"ID":"3c992b2a-afc8-4b71-ae58-bb6748930ac9","Type":"ContainerStarted","Data":"119c04ac0cd48d816631e0cfbf5b83dc1c6e169f48ea7347df264a954d563b91"} Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.179629 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.206949 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vzxs5" podStartSLOduration=8.206926095 podStartE2EDuration="8.206926095s" podCreationTimestamp="2026-01-26 14:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:06.203846489 +0000 UTC m=+143.406109271" watchObservedRunningTime="2026-01-26 14:12:06.206926095 +0000 UTC m=+143.409188867" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.210625 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fd75n" event={"ID":"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4","Type":"ContainerStarted","Data":"03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7"} Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.222136 4922 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.222690 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:06.72266931 +0000 UTC m=+143.924932082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.288642 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-p2r7g" event={"ID":"68e6d11a-9d45-42a9-a366-ee3485704024","Type":"ContainerStarted","Data":"af38862611d80e76727883b973441c7f9ef7e13080054322eb1d571cd7f98965"} Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.345026 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.349422 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" event={"ID":"17d69891-912a-4536-98d9-55ebf41a3ae1","Type":"ContainerStarted","Data":"53c11ca65bec9aec5e2a21269e014c9307c5c7f91da1a99e9aaa29c5b05bc95a"}
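Annotation (not journal output): the records above and below repeat one failure pair. Kubelet can neither tear down volume pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 for the deleted pod 8f668bae-612b-4b75-9490-919e737c6a3b nor stage it for the replacement pod image-registry-697d97f7c8-dst2r, because the CSI driver kubevirt.io.hostpath-provisioner has not yet registered with the kubelet; the csi-hostpathplugin pod only reports ContainerStarted at 14:12:07.562762 further down, which suggests the driver was still coming up. Each failed operation is requeued by nestedpendingoperations with the fixed 500ms backoff shown as durationBeforeRetry 500ms. What follows is a minimal stdlib-Go sketch, not part of the log, that tallies these requeues per volume from an excerpt read on stdin; the regular expression is an assumption fitted to the record format seen here, not a stable kubelet interface.

// retrycount.go - illustrative sketch, not part of the journal above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// retryRe is fitted to the nestedpendingoperations.go:348 records in this
// excerpt: it captures the volumeName and podName fields of the requeued op.
var retryRe = regexp.MustCompile(`Operation for "\{volumeName:([^ ]+) podName:([^ ]*) nodeName:[^}]*\}" failed\.`)

func main() {
	counts := map[string]int{} // volumeName -> requeues observed
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range retryRe.FindAllStringSubmatch(sc.Text(), -1) {
			// m[2] is empty for MountDevice retries (no pod bound yet) and a
			// pod UID for TearDown retries - the two signatures seen here.
			counts[m[1]]++
		}
	}
	for vol, n := range counts {
		fmt.Printf("%s: requeued %d time(s)\n", vol, n)
	}
}

Run as, say, go run retrycount.go < kubelet.log (hypothetical file name); on this excerpt it would report a double-digit requeue count for the single hostpath PVC.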
Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.350716 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:06.850700912 +0000 UTC m=+144.052963694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.355222 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.367249 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:12:06 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Jan 26 14:12:06 crc kubenswrapper[4922]: [+]process-running ok Jan 26 14:12:06 crc kubenswrapper[4922]: healthz check failed Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.367311 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.378939 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r68vc" podStartSLOduration=123.378921479 podStartE2EDuration="2m3.378921479s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:06.376023818 +0000 UTC m=+143.578286600" watchObservedRunningTime="2026-01-26 14:12:06.378921479 +0000 UTC m=+143.581184251" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.421475 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" event={"ID":"ef694d4b-2d06-4c04-924e-698540e3692a","Type":"ContainerStarted","Data":"6324420b8d9d8d21c23b052e0423aca554777a36d806f5c7753fad41140948a0"} Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.453701 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" event={"ID":"d82a6ce2-925e-45e8-8286-932ecb869755","Type":"ContainerStarted","Data":"cbe4418f988700356b850c63e2570782db3deadbf6d0ba5ecdf4e09877d70927"} Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.459353 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.459734 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-26 14:12:06.959691235 +0000 UTC m=+144.161954007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.460022 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.462052 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:06.962044149 +0000 UTC m=+144.164306921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.497213 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qrlv6" event={"ID":"d9829fed-e403-4cc4-b9be-88ca53421931","Type":"ContainerStarted","Data":"fecde25d202224a036c993beb359b9552cf00b991e80f77e1e58353f4cc6a531"} Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.507177 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.510442 4922 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-jfrvv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.510597 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" podUID="978b98ab-c044-4683-aabd-5d2b4d3cab70" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.518632 4922 generic.go:334] "Generic (PLEG): container finished" podID="ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf" containerID="4422e631f882903431941bf0a6ea371931547f83a0ae445f78b3dc8b030abedd" exitCode=0 Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.520247 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" 
event={"ID":"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf","Type":"ContainerDied","Data":"4422e631f882903431941bf0a6ea371931547f83a0ae445f78b3dc8b030abedd"} Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.523909 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-rxv7b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.530624 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rxv7b" podUID="ded42282-8aa9-4480-923f-87fa83ed5e7e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.542157 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjjdt" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.562667 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.562910 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.062877397 +0000 UTC m=+144.265140169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.564169 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.567723 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.067693688 +0000 UTC m=+144.269956460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.576179 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kchd7" podStartSLOduration=8.576146504 podStartE2EDuration="8.576146504s" podCreationTimestamp="2026-01-26 14:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:06.486949192 +0000 UTC m=+143.689211964" watchObservedRunningTime="2026-01-26 14:12:06.576146504 +0000 UTC m=+143.778409276" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.577304 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" podStartSLOduration=123.577294189 podStartE2EDuration="2m3.577294189s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:06.573569613 +0000 UTC m=+143.775832385" watchObservedRunningTime="2026-01-26 14:12:06.577294189 +0000 UTC m=+143.779556961" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.668356 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.669247 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.169217297 +0000 UTC m=+144.371480069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.669826 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.673857 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.173837713 +0000 UTC m=+144.376100475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.750608 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vlbbw" podStartSLOduration=123.750583633 podStartE2EDuration="2m3.750583633s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:06.74125514 +0000 UTC m=+143.943517912" watchObservedRunningTime="2026-01-26 14:12:06.750583633 +0000 UTC m=+143.952846405" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.774161 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.774704 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.274686461 +0000 UTC m=+144.476949233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.875042 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.875731 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.375718474 +0000 UTC m=+144.577981246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.896433 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-p2r7g" podStartSLOduration=123.896397243 podStartE2EDuration="2m3.896397243s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:06.786650346 +0000 UTC m=+143.988913118" watchObservedRunningTime="2026-01-26 14:12:06.896397243 +0000 UTC m=+144.098660015" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.953404 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z7f85" podStartSLOduration=123.953381654 podStartE2EDuration="2m3.953381654s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:06.947365775 +0000 UTC m=+144.149628537" watchObservedRunningTime="2026-01-26 14:12:06.953381654 +0000 UTC m=+144.155644446" Jan 26 14:12:06 crc kubenswrapper[4922]: I0126 14:12:06.976786 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:06 crc kubenswrapper[4922]: E0126 14:12:06.977162 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.47714692 +0000 UTC m=+144.679409692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.077890 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:07 crc kubenswrapper[4922]: E0126 14:12:07.078658 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.578643598 +0000 UTC m=+144.780906370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.187280 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:07 crc kubenswrapper[4922]: E0126 14:12:07.188133 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.688093687 +0000 UTC m=+144.890356459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.213798 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" podStartSLOduration=124.213782254 podStartE2EDuration="2m4.213782254s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:07.200233918 +0000 UTC m=+144.402496690" watchObservedRunningTime="2026-01-26 14:12:07.213782254 +0000 UTC m=+144.416045026" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.214926 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l7xmz"] Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.215813 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.221261 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.231587 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7xmz"] Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.289310 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-catalog-content\") pod \"certified-operators-l7xmz\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") " pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.289406 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdqbl\" (UniqueName: \"kubernetes.io/projected/18f19460-3c63-42ea-b891-10d9b8a36e2e-kube-api-access-gdqbl\") pod \"certified-operators-l7xmz\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") " pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.289431 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-utilities\") pod \"certified-operators-l7xmz\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") " pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.289467 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:07 crc kubenswrapper[4922]: E0126 14:12:07.289772 4922 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.789760651 +0000 UTC m=+144.992023423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.352871 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fd75n" podStartSLOduration=124.352829571 podStartE2EDuration="2m4.352829571s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:07.258522299 +0000 UTC m=+144.460785061" watchObservedRunningTime="2026-01-26 14:12:07.352829571 +0000 UTC m=+144.555092343" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.377906 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:12:07 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Jan 26 14:12:07 crc kubenswrapper[4922]: [+]process-running ok Jan 26 14:12:07 crc kubenswrapper[4922]: healthz check failed Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.377995 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.394438 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.394754 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdqbl\" (UniqueName: \"kubernetes.io/projected/18f19460-3c63-42ea-b891-10d9b8a36e2e-kube-api-access-gdqbl\") pod \"certified-operators-l7xmz\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") " pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.394798 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-utilities\") pod \"certified-operators-l7xmz\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") " pod="openshift-marketplace/certified-operators-l7xmz"
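Annotation (not journal output): the startup-probe records for router-default-5444994796-p2r7g quote the standard aggregated-healthz body format: each registered check prints as [+]name ok or [-]name failed, and any failing check turns the endpoint into an HTTP 500, which prober.go then logs together with the first lines of the body (start-of-body). Below is a minimal sketch of such an endpoint using only the Go standard library; the check names are copied from the records above, while the handler itself is illustrative and not the router's actual implementation.

// healthz.go - illustrative sketch of an aggregated health endpoint that
// yields the [-]/[+] body shape quoted in the probe records above.
package main

import (
	"fmt"
	"log"
	"net/http"
)

// Check names mirror the excerpt; the probe funcs are stand-ins that
// simulate a router that has not finished its initial sync.
var checks = []struct {
	name string
	ok   func() bool
}{
	{"backend-http", func() bool { return false }},
	{"has-synced", func() bool { return false }},
	{"process-running", func() bool { return true }},
}

func healthz(w http.ResponseWriter, _ *http.Request) {
	body, failed := "", false
	for _, c := range checks {
		if c.ok() {
			body += fmt.Sprintf("[+]%s ok\n", c.name)
		} else {
			body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			failed = true
		}
	}
	if failed {
		body += "healthz check failed\n"
		w.WriteHeader(http.StatusInternalServerError) // the 500 the prober records
	}
	fmt.Fprint(w, body)
}

func main() {
	http.HandleFunc("/healthz", healthz)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Until every check flips to [+], the kubelet keeps logging Startup probe failures like the ones at 14:12:06.367311 and 14:12:07.377995.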
\"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-catalog-content\") pod \"certified-operators-l7xmz\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") " pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.395912 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-catalog-content\") pod \"certified-operators-l7xmz\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") " pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: E0126 14:12:07.396399 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.89637328 +0000 UTC m=+145.098636062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.397026 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-utilities\") pod \"certified-operators-l7xmz\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") " pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.410362 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2tbdp"] Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.411667 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.424011 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.433667 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2tbdp"] Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.453956 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdqbl\" (UniqueName: \"kubernetes.io/projected/18f19460-3c63-42ea-b891-10d9b8a36e2e-kube-api-access-gdqbl\") pod \"certified-operators-l7xmz\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") " pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.495893 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:07 crc kubenswrapper[4922]: E0126 14:12:07.496217 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:07.996206005 +0000 UTC m=+145.198468777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.545410 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.562762 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vglvd" event={"ID":"93331b07-0680-4f0a-b2d6-e629aa6b207b","Type":"ContainerStarted","Data":"37f2b2fbb18a2e55783305ecda0481803d9b4544f4542fb12aa04afebdd9a68a"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.593147 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" event={"ID":"7398d291-531c-4828-b93d-81e238b3b6db","Type":"ContainerStarted","Data":"4eb801b359b5ee0f9f12f7f98ae2f0e8e0fe87aca42a756ea13acecb11d3df33"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.594276 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.601123 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.601282 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-utilities\") pod \"community-operators-2tbdp\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") " pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.601319 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-catalog-content\") pod \"community-operators-2tbdp\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") " pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.601351 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlsl\" (UniqueName: \"kubernetes.io/projected/eda39827-b747-4e2e-9c8c-5f699cdf4a96-kube-api-access-7jlsl\") pod \"community-operators-2tbdp\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") " pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: E0126 14:12:07.602431 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.102414721 +0000 UTC m=+145.304677493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.620119 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vtbzf"] Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.621460 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.623936 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" event={"ID":"154530fd-6283-436e-8fa7-c7b891904e4d","Type":"ContainerStarted","Data":"8c046cd1345325418526b639cbdda5156af316b4852379614a0337f1d373f81b"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.623986 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" event={"ID":"154530fd-6283-436e-8fa7-c7b891904e4d","Type":"ContainerStarted","Data":"246480bdaae89525f0fc0000c1ab8f9287d6e591526bfad848fd4233acc8292d"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.661145 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtbzf"] Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.668111 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m" podStartSLOduration=124.668088155 podStartE2EDuration="2m4.668088155s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:07.646686342 +0000 UTC m=+144.848949114" watchObservedRunningTime="2026-01-26 14:12:07.668088155 +0000 UTC m=+144.870350927" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.696837 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" event={"ID":"3c992b2a-afc8-4b71-ae58-bb6748930ac9","Type":"ContainerStarted","Data":"a69ede5d2db599ece96cbdb7ea0efeb07d45ed270b6e241cfee98667853d5f3e"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.702716 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jlsl\" (UniqueName: \"kubernetes.io/projected/eda39827-b747-4e2e-9c8c-5f699cdf4a96-kube-api-access-7jlsl\") pod \"community-operators-2tbdp\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") " pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.702767 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.702836 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-utilities\") pod \"community-operators-2tbdp\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") " pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.702858 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-catalog-content\") pod \"community-operators-2tbdp\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") " pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.703418 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-catalog-content\") pod \"community-operators-2tbdp\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") " pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: E0126 14:12:07.704296 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.204284032 +0000 UTC m=+145.406546804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.704518 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-utilities\") pod \"community-operators-2tbdp\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") " pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.724493 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" event={"ID":"978b98ab-c044-4683-aabd-5d2b4d3cab70","Type":"ContainerStarted","Data":"c4032b79ffeda82f8435c546e1a65c86f9c340d3473b806b9f467432a1e1b2c8"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.756872 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jfrvv" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.766799 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" event={"ID":"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf","Type":"ContainerStarted","Data":"41ffaaee934b4c28e26e5894006ad6c15be69f4290fcf1066a90ba9c86356fbf"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.767395 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jlsl\" (UniqueName: \"kubernetes.io/projected/eda39827-b747-4e2e-9c8c-5f699cdf4a96-kube-api-access-7jlsl\") pod \"community-operators-2tbdp\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") " 
pod="openshift-marketplace/community-operators-2tbdp" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.795860 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" event={"ID":"5ebc7443-38b3-490b-bb56-4fade81f4779","Type":"ContainerStarted","Data":"a5e8537ef6aa80ec8f7a2d3448855dd5d12fc6c22dc87acd61fc924dd7754e4c"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.806388 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.806563 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4m6j\" (UniqueName: \"kubernetes.io/projected/529fbc62-acac-4f76-92b5-2519ab246802-kube-api-access-x4m6j\") pod \"certified-operators-vtbzf\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.806654 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-utilities\") pod \"certified-operators-vtbzf\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.806778 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-catalog-content\") pod \"certified-operators-vtbzf\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:12:07 crc kubenswrapper[4922]: E0126 14:12:07.806929 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.306907075 +0000 UTC m=+145.509169847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.820899 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" event={"ID":"6761c679-5216-4857-adda-6f032dffb3ad","Type":"ContainerStarted","Data":"ad61aa70eb2f63c9e089bd96fb714ae4bf1d20aac03e1638ef5b8ca19a3b943f"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.833056 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-29lxg" podStartSLOduration=124.833032916 podStartE2EDuration="2m4.833032916s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:07.795256209 +0000 UTC m=+144.997518981" watchObservedRunningTime="2026-01-26 14:12:07.833032916 +0000 UTC m=+145.035295698" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.859321 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" event={"ID":"ef694d4b-2d06-4c04-924e-698540e3692a","Type":"ContainerStarted","Data":"43465877f998a76f97e19aed8abbc37c376b15d28468a45f49018c6592a2e73d"} Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.871459 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f9kr2"] Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.872633 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9kr2" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.898955 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9kr2"] Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.912012 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-utilities\") pod \"certified-operators-vtbzf\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.912082 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.912135 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-catalog-content\") pod \"certified-operators-vtbzf\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.912193 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4m6j\" (UniqueName: \"kubernetes.io/projected/529fbc62-acac-4f76-92b5-2519ab246802-kube-api-access-x4m6j\") pod \"certified-operators-vtbzf\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.913851 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-utilities\") pod \"certified-operators-vtbzf\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:12:07 crc kubenswrapper[4922]: E0126 14:12:07.914137 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.414123384 +0000 UTC m=+145.616386156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.920427 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-catalog-content\") pod \"certified-operators-vtbzf\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " pod="openshift-marketplace/certified-operators-vtbzf"
Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.920872 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" event={"ID":"b0cfa152-ddce-4531-9d48-07f06824d74b","Type":"ContainerStarted","Data":"b226d1241be19e3369fb5c9fe175237dd9ce351b81c34533cee8f7f20586e219"}
Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.939906 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" event={"ID":"5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60","Type":"ContainerStarted","Data":"d24efef5db12c3d3f35a5e848d26bba101571716f8373a1fe0914cdc9384f3f2"}
Jan 26 14:12:07 crc kubenswrapper[4922]: I0126 14:12:07.939960 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" event={"ID":"5bcc7d8f-9f3a-4c7c-be64-d30160a1fc60","Type":"ContainerStarted","Data":"d894a8eca12a18baef0479426c45041bbf008153271175ca960a11dd5253b95a"}
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.008425 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" event={"ID":"a24e8826-6ea2-4a96-84a8-5ae5931b5a9d","Type":"ContainerStarted","Data":"d8ae53d9fa79980f5fa60c62543d47e55794de6c5b63be8f9a3b1819b7935b88"}
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.020625 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.020921 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.520881307 +0000 UTC m=+145.723144079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.021728 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-utilities\") pod \"community-operators-f9kr2\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.021761 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-catalog-content\") pod \"community-operators-f9kr2\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.021906 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pbfl\" (UniqueName: \"kubernetes.io/projected/676089f7-e97f-40b6-94ca-77d491dbf2a5-kube-api-access-5pbfl\") pod \"community-operators-f9kr2\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.021996 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.029816 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.529798227 +0000 UTC m=+145.732060999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.044620 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4" event={"ID":"cc1ae0ea-ace8-40a5-bc24-fc1363e898c1","Type":"ContainerStarted","Data":"0b6d01fa68e96fca11134f85a219c3d120eeeaf32050f58163fd4e9a16e50335"}
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.046412 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2tbdp"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.052006 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" event={"ID":"538a74fd-fc9a-49f8-83cc-c33a83d15081","Type":"ContainerStarted","Data":"bbd46f2939937102fa04da55818c48317510df63c3e07634aa05e9fb44dd5165"}
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.077074 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4m6j\" (UniqueName: \"kubernetes.io/projected/529fbc62-acac-4f76-92b5-2519ab246802-kube-api-access-x4m6j\") pod \"certified-operators-vtbzf\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " pod="openshift-marketplace/certified-operators-vtbzf"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.122550 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.122755 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-utilities\") pod \"community-operators-f9kr2\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.122812 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-catalog-content\") pod \"community-operators-f9kr2\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.122911 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pbfl\" (UniqueName: \"kubernetes.io/projected/676089f7-e97f-40b6-94ca-77d491dbf2a5-kube-api-access-5pbfl\") pod \"community-operators-f9kr2\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.123507 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qrlv6" event={"ID":"d9829fed-e403-4cc4-b9be-88ca53421931","Type":"ContainerStarted","Data":"c756107c708993a186f90cda6262c154978d0c15b2d862eede3c1dd2c6cfb94d"}
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.123629 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.623611403 +0000 UTC m=+145.825874175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.124392 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-utilities\") pod \"community-operators-f9kr2\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.125693 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-catalog-content\") pod \"community-operators-f9kr2\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.126448 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qrlv6"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.128015 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" event={"ID":"fc95b55f-f4f5-4c20-a7da-c5c3b49a8dab","Type":"ContainerStarted","Data":"fa973a7e99347843ee34f414156105ae7fc83bd1fe994216921645530400816c"}
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.131216 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" event={"ID":"3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f","Type":"ContainerStarted","Data":"26a71f1f9e853396ebcb1e133226b39f413a97ad974c0bb97fcdf2f1998376ed"}
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.135193 4922 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-26rnv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.135258 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" podUID="12e31154-e0cc-4aa6-802b-31590a683866" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.144672 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kzfdk" podStartSLOduration=125.144650425 podStartE2EDuration="2m5.144650425s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.019354269 +0000 UTC m=+145.221617041" watchObservedRunningTime="2026-01-26 14:12:08.144650425 +0000 UTC m=+145.346913197"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.195177 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pbfl\" (UniqueName: \"kubernetes.io/projected/676089f7-e97f-40b6-94ca-77d491dbf2a5-kube-api-access-5pbfl\") pod \"community-operators-f9kr2\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.215932 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8tp2" podStartSLOduration=125.215910913 podStartE2EDuration="2m5.215910913s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.215612554 +0000 UTC m=+145.417875336" watchObservedRunningTime="2026-01-26 14:12:08.215910913 +0000 UTC m=+145.418173685"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.225637 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.232876 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.732855225 +0000 UTC m=+145.935117997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.235897 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r8trh"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.275623 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtbzf"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.276110 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9kr2"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.306952 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5mdjx" podStartSLOduration=127.306920002 podStartE2EDuration="2m7.306920002s" podCreationTimestamp="2026-01-26 14:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.304901789 +0000 UTC m=+145.507164561" watchObservedRunningTime="2026-01-26 14:12:08.306920002 +0000 UTC m=+145.509182774"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.329056 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.330369 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.830351128 +0000 UTC m=+146.032613900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.379279 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 14:12:08 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld
Jan 26 14:12:08 crc kubenswrapper[4922]: [+]process-running ok
Jan 26 14:12:08 crc kubenswrapper[4922]: healthz check failed
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.379353 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.430683 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.431219 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:08.931202046 +0000 UTC m=+146.133464828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.442499 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.478154 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-zkl86" podStartSLOduration=126.47813182 podStartE2EDuration="2m6.47813182s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.394696439 +0000 UTC m=+145.596959211" watchObservedRunningTime="2026-01-26 14:12:08.47813182 +0000 UTC m=+145.680394582"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.479306 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7xmz"]
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.494499 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2pbkx" podStartSLOduration=125.494481114 podStartE2EDuration="2m5.494481114s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.492709338 +0000 UTC m=+145.694972110" watchObservedRunningTime="2026-01-26 14:12:08.494481114 +0000 UTC m=+145.696743886"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.531489 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fdzqj" podStartSLOduration=126.531465876 podStartE2EDuration="2m6.531465876s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.529513824 +0000 UTC m=+145.731776606" watchObservedRunningTime="2026-01-26 14:12:08.531465876 +0000 UTC m=+145.733728648"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.531651 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.534487 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.034446409 +0000 UTC m=+146.236709171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: W0126 14:12:08.551961 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18f19460_3c63_42ea_b891_10d9b8a36e2e.slice/crio-b39c69d3e618f43118272112ee39e890ff44679d35471f5b5ce0b2925f85e5be WatchSource:0}: Error finding container b39c69d3e618f43118272112ee39e890ff44679d35471f5b5ce0b2925f85e5be: Status 404 returned error can't find the container with id b39c69d3e618f43118272112ee39e890ff44679d35471f5b5ce0b2925f85e5be
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.571141 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l6dpv" podStartSLOduration=125.571114211 podStartE2EDuration="2m5.571114211s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.55292118 +0000 UTC m=+145.755183952" watchObservedRunningTime="2026-01-26 14:12:08.571114211 +0000 UTC m=+145.773376983"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.632529 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jprp4" podStartSLOduration=125.632503799 podStartE2EDuration="2m5.632503799s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.607523915 +0000 UTC m=+145.809786687" watchObservedRunningTime="2026-01-26 14:12:08.632503799 +0000 UTC m=+145.834766571"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.634371 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" podStartSLOduration=126.634363268 podStartE2EDuration="2m6.634363268s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.632774348 +0000 UTC m=+145.835037120" watchObservedRunningTime="2026-01-26 14:12:08.634363268 +0000 UTC m=+145.836626030"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.638234 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.639608 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.139591082 +0000 UTC m=+146.341853854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.701205 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jhchn" podStartSLOduration=125.701182287 podStartE2EDuration="2m5.701182287s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.699984799 +0000 UTC m=+145.902247571" watchObservedRunningTime="2026-01-26 14:12:08.701182287 +0000 UTC m=+145.903445059"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.739099 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.739441 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.239420288 +0000 UTC m=+146.441683060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.829865 4922 csr.go:261] certificate signing request csr-68vrw is approved, waiting to be issued
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.832840 4922 csr.go:257] certificate signing request csr-68vrw is issued
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.846862 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.847280 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.347267736 +0000 UTC m=+146.549530508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.876028 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-s5rq6" podStartSLOduration=125.876005209 podStartE2EDuration="2m5.876005209s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.873454688 +0000 UTC m=+146.075717470" watchObservedRunningTime="2026-01-26 14:12:08.876005209 +0000 UTC m=+146.078267981"
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.950290 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:08 crc kubenswrapper[4922]: E0126 14:12:08.950651 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.450606702 +0000 UTC m=+146.652869474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:08 crc kubenswrapper[4922]: I0126 14:12:08.991250 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qrlv6" podStartSLOduration=10.991226358 podStartE2EDuration="10.991226358s" podCreationTimestamp="2026-01-26 14:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:08.937831991 +0000 UTC m=+146.140094763" watchObservedRunningTime="2026-01-26 14:12:08.991226358 +0000 UTC m=+146.193489140"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.011892 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2tbdp"]
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.053008 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.053529 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.553512534 +0000 UTC m=+146.755775306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.075960 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9kr2"]
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.149691 4922 generic.go:334] "Generic (PLEG): container finished" podID="538a74fd-fc9a-49f8-83cc-c33a83d15081" containerID="bbd46f2939937102fa04da55818c48317510df63c3e07634aa05e9fb44dd5165" exitCode=0
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.149753 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" event={"ID":"538a74fd-fc9a-49f8-83cc-c33a83d15081","Type":"ContainerDied","Data":"bbd46f2939937102fa04da55818c48317510df63c3e07634aa05e9fb44dd5165"}
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.151643 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tbdp" event={"ID":"eda39827-b747-4e2e-9c8c-5f699cdf4a96","Type":"ContainerStarted","Data":"e9ee5aeff376458f4e744275a44c95fbd165f0cbd1645d8d43d4038f1e7bbb26"}
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.154146 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.154548 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.654535147 +0000 UTC m=+146.856797919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.164461 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" event={"ID":"ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf","Type":"ContainerStarted","Data":"03297a7481abf46b1888008b390862f02de1f33f621dbff9b4bfbaf7a4a018f7"}
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.180219 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtbzf"]
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.183816 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vglvd" event={"ID":"93331b07-0680-4f0a-b2d6-e629aa6b207b","Type":"ContainerStarted","Data":"b8ee18a1102b1615272559fe891cd747823910439bc1054f4ae53224edd2341d"}
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.186185 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7xmz" event={"ID":"18f19460-3c63-42ea-b891-10d9b8a36e2e","Type":"ContainerStarted","Data":"b39c69d3e618f43118272112ee39e890ff44679d35471f5b5ce0b2925f85e5be"}
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.194349 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9kr2" event={"ID":"676089f7-e97f-40b6-94ca-77d491dbf2a5","Type":"ContainerStarted","Data":"efe549a13da8fc63c0dd8d70ee68a56f03b686c15a0a2a65bc77285eedb4ce33"}
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.210671 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" podStartSLOduration=127.21063615 podStartE2EDuration="2m7.21063615s" podCreationTimestamp="2026-01-26 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:09.20933942 +0000 UTC m=+146.411602212" watchObservedRunningTime="2026-01-26 14:12:09.21063615 +0000 UTC m=+146.412898922"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.221045 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.257150 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.257798 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.757779541 +0000 UTC m=+146.960042313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.358421 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.358593 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.858564727 +0000 UTC m=+147.060827499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.358885 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.362147 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 14:12:09 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld
Jan 26 14:12:09 crc kubenswrapper[4922]: [+]process-running ok
Jan 26 14:12:09 crc kubenswrapper[4922]: healthz check failed
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.362218 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.364676 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.864654638 +0000 UTC m=+147.066917410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.460653 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.461262 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:09.961243723 +0000 UTC m=+147.163506495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.563123 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.563495 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.063480894 +0000 UTC m=+147.265743666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.602143 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5cvbq"]
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.603347 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.610086 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.614434 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cvbq"]
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.664707 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.665517 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.165500089 +0000 UTC m=+147.367762861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.677129 4922 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.766820 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-catalog-content\") pod \"redhat-marketplace-5cvbq\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") " pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.766899 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-utilities\") pod \"redhat-marketplace-5cvbq\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") " pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.766936 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.767111 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87jz8\" (UniqueName: \"kubernetes.io/projected/95a340dd-cf35-496b-aae2-9190b1b24d2b-kube-api-access-87jz8\") pod \"redhat-marketplace-5cvbq\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") " pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.767494 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.267467592 +0000 UTC m=+147.469730564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.833834 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 14:07:08 +0000 UTC, rotation deadline is 2026-11-05 05:06:56.654460573 +0000 UTC
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.833882 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6782h54m46.820580675s for next certificate rotation
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.868788 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.868948 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.368927829 +0000 UTC m=+147.571190601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.868977 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-utilities\") pod \"redhat-marketplace-5cvbq\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") " pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.869021 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.869082 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87jz8\" (UniqueName: \"kubernetes.io/projected/95a340dd-cf35-496b-aae2-9190b1b24d2b-kube-api-access-87jz8\") pod \"redhat-marketplace-5cvbq\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") " pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.869146 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-catalog-content\") pod \"redhat-marketplace-5cvbq\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") " pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.869422 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.369412595 +0000 UTC m=+147.571675367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.869652 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-utilities\") pod \"redhat-marketplace-5cvbq\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") " pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.869731 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-catalog-content\") pod \"redhat-marketplace-5cvbq\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") " pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.893624 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87jz8\" (UniqueName: \"kubernetes.io/projected/95a340dd-cf35-496b-aae2-9190b1b24d2b-kube-api-access-87jz8\") pod \"redhat-marketplace-5cvbq\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") " pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.969969 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.970234 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.470204641 +0000 UTC m=+147.672467413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.970418 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:09 crc kubenswrapper[4922]: E0126 14:12:09.971272 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.471262224 +0000 UTC m=+147.673524996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.983661 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:12:09 crc kubenswrapper[4922]: I0126 14:12:09.999933 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lrldq"]
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.001427 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrldq"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.015628 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrldq"]
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.071700 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:10 crc kubenswrapper[4922]: E0126 14:12:10.071958 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.571917536 +0000 UTC m=+147.774180308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.072113 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.072171 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:12:10 crc kubenswrapper[4922]: E0126 14:12:10.072599 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.572576176 +0000 UTC m=+147.774838948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.076158 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.173090 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:10 crc kubenswrapper[4922]: E0126 14:12:10.173341 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.67330141 +0000 UTC m=+147.875564172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.174621 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78ftm\" (UniqueName: \"kubernetes.io/projected/9483648b-7a48-480a-8097-5e08962e36ce-kube-api-access-78ftm\") pod \"redhat-marketplace-lrldq\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " pod="openshift-marketplace/redhat-marketplace-lrldq"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.174692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.174749 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.174805 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.174854 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-utilities\") pod \"redhat-marketplace-lrldq\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " pod="openshift-marketplace/redhat-marketplace-lrldq"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.174991 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.175084 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-catalog-content\") pod \"redhat-marketplace-lrldq\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " pod="openshift-marketplace/redhat-marketplace-lrldq"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.176823 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:12:10 crc kubenswrapper[4922]: E0126 14:12:10.178890 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.678874995 +0000 UTC m=+147.881137767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.181690 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.183935 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.202145 4922 generic.go:334] "Generic (PLEG): container finished" podID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerID="505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331" exitCode=0
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.202234 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9kr2" event={"ID":"676089f7-e97f-40b6-94ca-77d491dbf2a5","Type":"ContainerDied","Data":"505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331"}
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.206637 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.209590 4922 generic.go:334] "Generic (PLEG): container finished" podID="529fbc62-acac-4f76-92b5-2519ab246802" containerID="56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a" exitCode=0
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.209721 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbzf" event={"ID":"529fbc62-acac-4f76-92b5-2519ab246802","Type":"ContainerDied","Data":"56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a"}
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.209756 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbzf" event={"ID":"529fbc62-acac-4f76-92b5-2519ab246802","Type":"ContainerStarted","Data":"5b500d51802be5f6af027cda0a076613704e117ea427973ce78217eec870812c"}
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.214649 4922 generic.go:334] "Generic (PLEG): container finished" podID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerID="83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23" exitCode=0
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.214737 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tbdp" event={"ID":"eda39827-b747-4e2e-9c8c-5f699cdf4a96","Type":"ContainerDied","Data":"83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23"}
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.217800 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vglvd" event={"ID":"93331b07-0680-4f0a-b2d6-e629aa6b207b","Type":"ContainerStarted","Data":"34999c3d859219d8a87c61c536555162784825612b3566fc48421337c3fddfc6"}
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.217890 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vglvd" event={"ID":"93331b07-0680-4f0a-b2d6-e629aa6b207b","Type":"ContainerStarted","Data":"2c13aa71b93299bf65cfea221144bcbaccc1f62cf3fa408bb896dbf0ba5770f3"}
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.220522 4922 generic.go:334] "Generic (PLEG): container finished" podID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerID="7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a" exitCode=0
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.225379 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7xmz" event={"ID":"18f19460-3c63-42ea-b891-10d9b8a36e2e","Type":"ContainerDied","Data":"7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a"}
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.250211 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cvbq"]
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.277693 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 14:12:10 crc kubenswrapper[4922]: E0126 14:12:10.277886 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.777841014 +0000 UTC m=+147.980103786 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.278533 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-catalog-content\") pod \"redhat-marketplace-lrldq\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.278636 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78ftm\" (UniqueName: \"kubernetes.io/projected/9483648b-7a48-480a-8097-5e08962e36ce-kube-api-access-78ftm\") pod \"redhat-marketplace-lrldq\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.278727 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-utilities\") pod \"redhat-marketplace-lrldq\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.278831 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.279274 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-catalog-content\") pod \"redhat-marketplace-lrldq\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.279605 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-utilities\") pod \"redhat-marketplace-lrldq\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:12:10 crc kubenswrapper[4922]: E0126 14:12:10.286447 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:12:10.786278289 +0000 UTC m=+147.988541061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dst2r" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.300225 4922 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T14:12:09.677149655Z","Handler":null,"Name":""} Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.302177 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78ftm\" (UniqueName: \"kubernetes.io/projected/9483648b-7a48-480a-8097-5e08962e36ce-kube-api-access-78ftm\") pod \"redhat-marketplace-lrldq\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.305697 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-vglvd" podStartSLOduration=12.305681468 podStartE2EDuration="12.305681468s" podCreationTimestamp="2026-01-26 14:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:10.302015613 +0000 UTC m=+147.504278395" watchObservedRunningTime="2026-01-26 14:12:10.305681468 +0000 UTC m=+147.507944240" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.312448 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.323520 4922 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.323580 4922 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.324589 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.331476 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.349008 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.360425 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:12:10 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Jan 26 14:12:10 crc kubenswrapper[4922]: [+]process-running ok Jan 26 14:12:10 crc kubenswrapper[4922]: healthz check failed Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.360497 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.380858 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.396026 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tj7hb"] Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.397888 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.410738 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.414573 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.416718 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tj7hb"] Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.477324 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.484833 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.491859 4922 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
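The repeated MountVolume.MountDevice and UnmountVolume.TearDown failures above share one root cause: the volume operations race the CSI plugin's registration. Every attempt fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and is requeued with a 500ms durationBeforeRetry, until the kubelet picks up the plugin's registration socket under /var/lib/kubelet/plugins_registry/ (the RegisterPlugin entry at 14:12:10.300225) and validates and registers the driver at /var/lib/kubelet/plugins/csi-hostpath/csi.sock. The pending operations then go through on their next retry, as the entries that follow show; and because the driver does not advertise STAGE_UNSTAGE_VOLUME, device staging is skipped. Below is a minimal Go sketch of that retry-until-registered pattern; driverRegistry and mountDevice are illustrative names, not the kubelet's actual types.

```go
// Sketch of the retry loop visible in the log: lookups against a driver
// registry fail until the plugin registers, and each failure is requeued
// with the 500ms backoff printed as durationBeforeRetry. Names here are
// hypothetical; the real kubelet routes this through nestedpendingoperations.
package main

import (
	"fmt"
	"sync"
	"time"
)

type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> CSI endpoint socket
}

func (r *driverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

func (r *driverRegistry) lookup(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		// Same shape as the error in the log entries above.
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

// mountDevice retries until the driver appears in the registry, mirroring
// the "No retries permitted until ... (durationBeforeRetry 500ms)" requeue.
func mountDevice(r *driverRegistry, driver, volume string) error {
	const backoff = 500 * time.Millisecond
	for attempt := 0; attempt < 10; attempt++ {
		ep, err := r.lookup(driver)
		if err == nil {
			fmt.Printf("MountDevice succeeded for %s via %s\n", volume, ep)
			return nil
		}
		fmt.Printf("MountDevice failed: %v; retrying in %v\n", err, backoff)
		time.Sleep(backoff)
	}
	return fmt.Errorf("giving up on volume %s", volume)
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}}
	// Simulate the plugin registering ~1s into the retry loop, as happens
	// above when the kubelet finds kubevirt.io.hostpath-provisioner-reg.sock.
	go func() {
		time.Sleep(time.Second)
		reg.register("kubevirt.io.hostpath-provisioner",
			"/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	}()
	_ = mountDevice(reg, "kubevirt.io.hostpath-provisioner",
		"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
}
```

The real kubelet's per-operation backoff can grow between attempts; the fixed 500ms here simply matches what this window records.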
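The router-default startup probe failures that recur through this window follow the standard Kubernetes healthz response shape: each named check is printed as [+] or [-], failing checks withhold their reason on the plain (non-verbose) path, and any failure turns the response into HTTP 500, which the kubelet records as probeResult="failure". A small self-contained Go sketch of that response format follows; the check names (backend-http, has-synced, process-running) are copied from the probe output above, while the server itself is illustrative and not the router's actual code.

```go
// Sketch of a healthz-style endpoint producing the "[-]... failed: reason
// withheld" / "healthz check failed" body seen in the startup probe output.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	ok   func() bool
}

func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, _ *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if c.ok() {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				// Failure reasons are withheld from the plain response,
				// hence "reason withheld" in the probe output above.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe sees 500
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	synced := false // flips to true once the router has synced its config
	http.HandleFunc("/healthz", healthzHandler([]check{
		{"backend-http", func() bool { return synced }},
		{"has-synced", func() bool { return synced }},
		{"process-running", func() bool { return true }},
	}))
	_ = http.ListenAndServe("127.0.0.1:8080", nil)
}
```

Once the failing checks start passing, a handler like this returns 200 and the kubelet flips the startup probe to status="started", as happens for the apiserver pods later in this window.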
Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.491897 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.549444 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dst2r\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.555306 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.587412 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538a74fd-fc9a-49f8-83cc-c33a83d15081-secret-volume\") pod \"538a74fd-fc9a-49f8-83cc-c33a83d15081\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.587660 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538a74fd-fc9a-49f8-83cc-c33a83d15081-config-volume\") pod \"538a74fd-fc9a-49f8-83cc-c33a83d15081\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.587702 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzd67\" (UniqueName: \"kubernetes.io/projected/538a74fd-fc9a-49f8-83cc-c33a83d15081-kube-api-access-dzd67\") pod \"538a74fd-fc9a-49f8-83cc-c33a83d15081\" (UID: \"538a74fd-fc9a-49f8-83cc-c33a83d15081\") " Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.587904 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42w6v\" (UniqueName: \"kubernetes.io/projected/cfcca17c-5b8e-42fa-8fa2-56139592b85b-kube-api-access-42w6v\") pod \"redhat-operators-tj7hb\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") " pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.587963 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-utilities\") pod \"redhat-operators-tj7hb\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") " pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.587985 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-catalog-content\") pod \"redhat-operators-tj7hb\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") " 
pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.588540 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538a74fd-fc9a-49f8-83cc-c33a83d15081-config-volume" (OuterVolumeSpecName: "config-volume") pod "538a74fd-fc9a-49f8-83cc-c33a83d15081" (UID: "538a74fd-fc9a-49f8-83cc-c33a83d15081"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.604840 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cchdg"] Jan 26 14:12:10 crc kubenswrapper[4922]: E0126 14:12:10.605147 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538a74fd-fc9a-49f8-83cc-c33a83d15081" containerName="collect-profiles" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.605165 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="538a74fd-fc9a-49f8-83cc-c33a83d15081" containerName="collect-profiles" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.605271 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="538a74fd-fc9a-49f8-83cc-c33a83d15081" containerName="collect-profiles" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.605978 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538a74fd-fc9a-49f8-83cc-c33a83d15081-kube-api-access-dzd67" (OuterVolumeSpecName: "kube-api-access-dzd67") pod "538a74fd-fc9a-49f8-83cc-c33a83d15081" (UID: "538a74fd-fc9a-49f8-83cc-c33a83d15081"). InnerVolumeSpecName "kube-api-access-dzd67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.606178 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.606390 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538a74fd-fc9a-49f8-83cc-c33a83d15081-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "538a74fd-fc9a-49f8-83cc-c33a83d15081" (UID: "538a74fd-fc9a-49f8-83cc-c33a83d15081"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.607723 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cchdg"] Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.642100 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.642401 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.652294 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.688936 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42w6v\" (UniqueName: \"kubernetes.io/projected/cfcca17c-5b8e-42fa-8fa2-56139592b85b-kube-api-access-42w6v\") pod \"redhat-operators-tj7hb\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") " pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.689025 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-utilities\") pod \"redhat-operators-tj7hb\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") " pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.689055 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-catalog-content\") pod \"redhat-operators-tj7hb\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") " pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.689129 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/538a74fd-fc9a-49f8-83cc-c33a83d15081-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.689141 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538a74fd-fc9a-49f8-83cc-c33a83d15081-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.689151 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzd67\" (UniqueName: \"kubernetes.io/projected/538a74fd-fc9a-49f8-83cc-c33a83d15081-kube-api-access-dzd67\") on node \"crc\" DevicePath \"\"" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.689563 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-catalog-content\") pod \"redhat-operators-tj7hb\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") " pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.689804 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-utilities\") pod \"redhat-operators-tj7hb\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") " pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 
14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.708836 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42w6v\" (UniqueName: \"kubernetes.io/projected/cfcca17c-5b8e-42fa-8fa2-56139592b85b-kube-api-access-42w6v\") pod \"redhat-operators-tj7hb\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") " pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.728026 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.789983 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-utilities\") pod \"redhat-operators-cchdg\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.790146 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tghkw\" (UniqueName: \"kubernetes.io/projected/64870ab4-a50f-4e29-af84-3b2f63f16180-kube-api-access-tghkw\") pod \"redhat-operators-cchdg\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.790178 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-catalog-content\") pod \"redhat-operators-cchdg\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.892957 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tghkw\" (UniqueName: \"kubernetes.io/projected/64870ab4-a50f-4e29-af84-3b2f63f16180-kube-api-access-tghkw\") pod \"redhat-operators-cchdg\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.893010 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-catalog-content\") pod \"redhat-operators-cchdg\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.893076 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-utilities\") pod \"redhat-operators-cchdg\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.893568 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-utilities\") pod \"redhat-operators-cchdg\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.894129 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-catalog-content\") pod \"redhat-operators-cchdg\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.907679 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-rxv7b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.907753 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rxv7b" podUID="ded42282-8aa9-4480-923f-87fa83ed5e7e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.907863 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-rxv7b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.907941 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rxv7b" podUID="ded42282-8aa9-4480-923f-87fa83ed5e7e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.929097 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tghkw\" (UniqueName: \"kubernetes.io/projected/64870ab4-a50f-4e29-af84-3b2f63f16180-kube-api-access-tghkw\") pod \"redhat-operators-cchdg\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.934449 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrldq"] Jan 26 14:12:10 crc kubenswrapper[4922]: I0126 14:12:10.943874 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:12:10 crc kubenswrapper[4922]: W0126 14:12:10.983614 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-230b03f977bdd69c9d81c5bc70247a1e4ab78ba650c20e801a73eab20237beec WatchSource:0}: Error finding container 230b03f977bdd69c9d81c5bc70247a1e4ab78ba650c20e801a73eab20237beec: Status 404 returned error can't find the container with id 230b03f977bdd69c9d81c5bc70247a1e4ab78ba650c20e801a73eab20237beec Jan 26 14:12:11 crc kubenswrapper[4922]: W0126 14:12:11.025623 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-1b50bd82f690c2ad4d093ed7fcaef4574eaee0c29398b1551588e4632249ba16 WatchSource:0}: Error finding container 1b50bd82f690c2ad4d093ed7fcaef4574eaee0c29398b1551588e4632249ba16: Status 404 returned error can't find the container with id 1b50bd82f690c2ad4d093ed7fcaef4574eaee0c29398b1551588e4632249ba16 Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.082406 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dst2r"] Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.122095 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 14:12:11 crc kubenswrapper[4922]: W0126 14:12:11.216423 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfcca17c_5b8e_42fa_8fa2_56139592b85b.slice/crio-637bd2c30f01bafed5a95e0d8c0d3a965c3955af72e68e465a842a5fec773d0a WatchSource:0}: Error finding container 637bd2c30f01bafed5a95e0d8c0d3a965c3955af72e68e465a842a5fec773d0a: Status 404 returned error can't find the container with id 637bd2c30f01bafed5a95e0d8c0d3a965c3955af72e68e465a842a5fec773d0a Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.225514 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tj7hb"] Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.248997 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" event={"ID":"538a74fd-fc9a-49f8-83cc-c33a83d15081","Type":"ContainerDied","Data":"6aa64f76217f806bc2877ba325973e8402e9025b291e466e580bf8dc15fb863a"} Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.249058 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa64f76217f806bc2877ba325973e8402e9025b291e466e580bf8dc15fb863a" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.249237 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.250415 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrldq" event={"ID":"9483648b-7a48-480a-8097-5e08962e36ce","Type":"ContainerStarted","Data":"8ea08fc19c2a6db69ec69c61268ba2aa72dbe4ad810efe3f2b33fe89da5149b9"} Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.276759 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cchdg"] Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.282835 4922 generic.go:334] "Generic (PLEG): container finished" podID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerID="ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387" exitCode=0 Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.282930 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cvbq" event={"ID":"95a340dd-cf35-496b-aae2-9190b1b24d2b","Type":"ContainerDied","Data":"ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387"} Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.282969 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cvbq" event={"ID":"95a340dd-cf35-496b-aae2-9190b1b24d2b","Type":"ContainerStarted","Data":"5caf5ca5a54d1710abec09c60d52739aa821955ba85ce740bad4bdb3252f3ffe"} Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.292050 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tj7hb" event={"ID":"cfcca17c-5b8e-42fa-8fa2-56139592b85b","Type":"ContainerStarted","Data":"637bd2c30f01bafed5a95e0d8c0d3a965c3955af72e68e465a842a5fec773d0a"} Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.302105 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.302514 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.307646 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.307707 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.314116 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" event={"ID":"49958f99-8b05-4ebb-9eb6-396020c374eb","Type":"ContainerStarted","Data":"d620a1a67708cf675f5db9ca6196200fcb894ef7bc7246a3d81ed6c319be391d"} Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.315523 4922 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9tg4w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[+]ping ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]log ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]etcd ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/max-in-flight-filter ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 14:12:11 crc kubenswrapper[4922]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/openshift.io-startinformers ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 14:12:11 crc kubenswrapper[4922]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 14:12:11 crc kubenswrapper[4922]: livez check failed Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.315594 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w" podUID="ba8a8c28-cfe8-42ae-bbcb-4c2b674f61bf" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.327246 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.327301 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.328921 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1b50bd82f690c2ad4d093ed7fcaef4574eaee0c29398b1551588e4632249ba16"} Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.332524 4922 patch_prober.go:28] interesting pod/console-f9d7485db-fd75n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.332580 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fd75n" podUID="69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.334137 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ad22ec26dee3363755de0116ff933a0090ac5bfbc03cbe71d93095a2f75495c2"} Jan 26 14:12:11 crc kubenswrapper[4922]: W0126 14:12:11.338640 4922 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64870ab4_a50f_4e29_af84_3b2f63f16180.slice/crio-735dfc5004d2cbce4fbc05b181a4b158af2abf197d8fc543aae0ea78ba5bdaf9 WatchSource:0}: Error finding container 735dfc5004d2cbce4fbc05b181a4b158af2abf197d8fc543aae0ea78ba5bdaf9: Status 404 returned error can't find the container with id 735dfc5004d2cbce4fbc05b181a4b158af2abf197d8fc543aae0ea78ba5bdaf9 Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.349765 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"230b03f977bdd69c9d81c5bc70247a1e4ab78ba650c20e801a73eab20237beec"} Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.355407 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-p2r7g" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.357450 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q5z2b" Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.359691 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:12:11 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Jan 26 14:12:11 crc kubenswrapper[4922]: [+]process-running ok Jan 26 14:12:11 crc kubenswrapper[4922]: healthz check failed Jan 26 14:12:11 crc kubenswrapper[4922]: I0126 14:12:11.359726 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.360009 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"cffd5cfd03a7ecff740b5ecb937ab2daadcd4dadcd1625b767acf975b3860aeb"} Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.360854 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.363228 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:12:12 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Jan 26 14:12:12 crc kubenswrapper[4922]: [+]process-running ok Jan 26 14:12:12 crc kubenswrapper[4922]: healthz check failed Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.363269 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.369852 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b9655cd53f8a8e9cbd26e47db19edae7fa8dd291c7cac2b149d07d754134ee7b"} Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.382975 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cchdg" event={"ID":"64870ab4-a50f-4e29-af84-3b2f63f16180","Type":"ContainerDied","Data":"9f915094cf4d63253edbfe63c49e8f453a33957927002f16575046bc8199e9b0"} Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.382558 4922 generic.go:334] "Generic (PLEG): container finished" podID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerID="9f915094cf4d63253edbfe63c49e8f453a33957927002f16575046bc8199e9b0" exitCode=0 Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.384077 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cchdg" event={"ID":"64870ab4-a50f-4e29-af84-3b2f63f16180","Type":"ContainerStarted","Data":"735dfc5004d2cbce4fbc05b181a4b158af2abf197d8fc543aae0ea78ba5bdaf9"} Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.458105 4922 generic.go:334] "Generic (PLEG): container finished" podID="9483648b-7a48-480a-8097-5e08962e36ce" containerID="976a6e05d0e4886566ff997f35d8f306ed356b5853e642f6590d262693ffdb68" exitCode=0 Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.458244 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrldq" event={"ID":"9483648b-7a48-480a-8097-5e08962e36ce","Type":"ContainerDied","Data":"976a6e05d0e4886566ff997f35d8f306ed356b5853e642f6590d262693ffdb68"} Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.475428 4922 generic.go:334] "Generic (PLEG): container finished" podID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerID="90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0" exitCode=0 Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.475716 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tj7hb" event={"ID":"cfcca17c-5b8e-42fa-8fa2-56139592b85b","Type":"ContainerDied","Data":"90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0"} Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.491432 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" event={"ID":"49958f99-8b05-4ebb-9eb6-396020c374eb","Type":"ContainerStarted","Data":"dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d"} Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.492076 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.500037 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5368436993a5404f8d6816738656c8973f8c53e5e629bed1a0eff31645896110"} Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.539490 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" podStartSLOduration=129.539467168 podStartE2EDuration="2m9.539467168s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:12.525322334 +0000 
UTC m=+149.727585106" watchObservedRunningTime="2026-01-26 14:12:12.539467168 +0000 UTC m=+149.741729940" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.585213 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.586083 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.587740 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.588333 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.597481 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.728156 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eb77bb2-c096-49cc-94de-1d9c00134803-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5eb77bb2-c096-49cc-94de-1d9c00134803\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.728325 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eb77bb2-c096-49cc-94de-1d9c00134803-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5eb77bb2-c096-49cc-94de-1d9c00134803\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.830167 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eb77bb2-c096-49cc-94de-1d9c00134803-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5eb77bb2-c096-49cc-94de-1d9c00134803\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.830273 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eb77bb2-c096-49cc-94de-1d9c00134803-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5eb77bb2-c096-49cc-94de-1d9c00134803\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.830351 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eb77bb2-c096-49cc-94de-1d9c00134803-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5eb77bb2-c096-49cc-94de-1d9c00134803\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.870869 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eb77bb2-c096-49cc-94de-1d9c00134803-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5eb77bb2-c096-49cc-94de-1d9c00134803\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:12:12 crc kubenswrapper[4922]: I0126 14:12:12.912418 4922 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 14:12:13 crc kubenswrapper[4922]: I0126 14:12:13.358455 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 14:12:13 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld
Jan 26 14:12:13 crc kubenswrapper[4922]: [+]process-running ok
Jan 26 14:12:13 crc kubenswrapper[4922]: healthz check failed
Jan 26 14:12:13 crc kubenswrapper[4922]: I0126 14:12:13.359119 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:12:13 crc kubenswrapper[4922]: I0126 14:12:13.453037 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 26 14:12:13 crc kubenswrapper[4922]: W0126 14:12:13.473442 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5eb77bb2_c096_49cc_94de_1d9c00134803.slice/crio-119665fd04cfd3c399856cb69bc997ef0d8eb4783a1744d69d402b0f0186eac8 WatchSource:0}: Error finding container 119665fd04cfd3c399856cb69bc997ef0d8eb4783a1744d69d402b0f0186eac8: Status 404 returned error can't find the container with id 119665fd04cfd3c399856cb69bc997ef0d8eb4783a1744d69d402b0f0186eac8
Jan 26 14:12:13 crc kubenswrapper[4922]: I0126 14:12:13.515096 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5eb77bb2-c096-49cc-94de-1d9c00134803","Type":"ContainerStarted","Data":"119665fd04cfd3c399856cb69bc997ef0d8eb4783a1744d69d402b0f0186eac8"}
Jan 26 14:12:14 crc kubenswrapper[4922]: I0126 14:12:14.359032 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 14:12:14 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld
Jan 26 14:12:14 crc kubenswrapper[4922]: [+]process-running ok
Jan 26 14:12:14 crc kubenswrapper[4922]: healthz check failed
Jan 26 14:12:14 crc kubenswrapper[4922]: I0126 14:12:14.360309 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:12:14 crc kubenswrapper[4922]: I0126 14:12:14.529776 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5eb77bb2-c096-49cc-94de-1d9c00134803","Type":"ContainerStarted","Data":"c499f4d1c334649d8f599760a3700afbc103df5cb3dd49d5dd99f5e9f9dc5cb3"}
Jan 26 14:12:14 crc kubenswrapper[4922]: I0126 14:12:14.550415 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.550213091 podStartE2EDuration="2.550213091s" podCreationTimestamp="2026-01-26 14:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:14.546114672 +0000 UTC m=+151.748377444" watchObservedRunningTime="2026-01-26 14:12:14.550213091 +0000 UTC m=+151.752475863"
Jan 26 14:12:14 crc kubenswrapper[4922]: E0126 14:12:14.552160 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod5eb77bb2_c096_49cc_94de_1d9c00134803.slice/crio-c499f4d1c334649d8f599760a3700afbc103df5cb3dd49d5dd99f5e9f9dc5cb3.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.050310 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.052294 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.054236 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.058203 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.058562 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.192207 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee8485a6-67c1-464a-912c-e92667570d79-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ee8485a6-67c1-464a-912c-e92667570d79\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.192336 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee8485a6-67c1-464a-912c-e92667570d79-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ee8485a6-67c1-464a-912c-e92667570d79\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.293769 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee8485a6-67c1-464a-912c-e92667570d79-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ee8485a6-67c1-464a-912c-e92667570d79\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.293899 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee8485a6-67c1-464a-912c-e92667570d79-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ee8485a6-67c1-464a-912c-e92667570d79\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.294218 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee8485a6-67c1-464a-912c-e92667570d79-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ee8485a6-67c1-464a-912c-e92667570d79\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.330209 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee8485a6-67c1-464a-912c-e92667570d79-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ee8485a6-67c1-464a-912c-e92667570d79\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.360743 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 14:12:15 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld
Jan 26 14:12:15 crc kubenswrapper[4922]: [+]process-running ok
Jan 26 14:12:15 crc kubenswrapper[4922]: healthz check failed
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.360890 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.386614 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.550715 4922 generic.go:334] "Generic (PLEG): container finished" podID="5eb77bb2-c096-49cc-94de-1d9c00134803" containerID="c499f4d1c334649d8f599760a3700afbc103df5cb3dd49d5dd99f5e9f9dc5cb3" exitCode=0
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.550765 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5eb77bb2-c096-49cc-94de-1d9c00134803","Type":"ContainerDied","Data":"c499f4d1c334649d8f599760a3700afbc103df5cb3dd49d5dd99f5e9f9dc5cb3"}
Jan 26 14:12:15 crc kubenswrapper[4922]: I0126 14:12:15.800776 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 26 14:12:15 crc kubenswrapper[4922]: W0126 14:12:15.811356 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podee8485a6_67c1_464a_912c_e92667570d79.slice/crio-b1f888e6a501c0357f8331664939fa3f1bda2ded21831f6849882791f55293fa WatchSource:0}: Error finding container b1f888e6a501c0357f8331664939fa3f1bda2ded21831f6849882791f55293fa: Status 404 returned error can't find the container with id b1f888e6a501c0357f8331664939fa3f1bda2ded21831f6849882791f55293fa
Jan 26 14:12:16 crc kubenswrapper[4922]: I0126 14:12:16.312347 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w"
Jan 26 14:12:16 crc kubenswrapper[4922]: I0126 14:12:16.318026 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9tg4w"
Jan 26 14:12:16 crc kubenswrapper[4922]: I0126 14:12:16.357955 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 14:12:16 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld
Jan 26 14:12:16 crc kubenswrapper[4922]: [+]process-running ok
Jan 26 14:12:16 crc kubenswrapper[4922]: healthz check failed
Jan 26 14:12:16 crc kubenswrapper[4922]: I0126 14:12:16.358029 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:12:16 crc kubenswrapper[4922]: I0126 14:12:16.530578 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qrlv6"
Jan 26 14:12:16 crc kubenswrapper[4922]: I0126 14:12:16.575093 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee8485a6-67c1-464a-912c-e92667570d79","Type":"ContainerStarted","Data":"b1f888e6a501c0357f8331664939fa3f1bda2ded21831f6849882791f55293fa"}
Jan 26 14:12:16 crc kubenswrapper[4922]: I0126 14:12:16.915610 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.029789 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eb77bb2-c096-49cc-94de-1d9c00134803-kubelet-dir\") pod \"5eb77bb2-c096-49cc-94de-1d9c00134803\" (UID: \"5eb77bb2-c096-49cc-94de-1d9c00134803\") "
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.029953 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eb77bb2-c096-49cc-94de-1d9c00134803-kube-api-access\") pod \"5eb77bb2-c096-49cc-94de-1d9c00134803\" (UID: \"5eb77bb2-c096-49cc-94de-1d9c00134803\") "
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.030183 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eb77bb2-c096-49cc-94de-1d9c00134803-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5eb77bb2-c096-49cc-94de-1d9c00134803" (UID: "5eb77bb2-c096-49cc-94de-1d9c00134803"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.030542 4922 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eb77bb2-c096-49cc-94de-1d9c00134803-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.073180 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb77bb2-c096-49cc-94de-1d9c00134803-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5eb77bb2-c096-49cc-94de-1d9c00134803" (UID: "5eb77bb2-c096-49cc-94de-1d9c00134803"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.131950 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eb77bb2-c096-49cc-94de-1d9c00134803-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.358129 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 14:12:17 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld
Jan 26 14:12:17 crc kubenswrapper[4922]: [+]process-running ok
Jan 26 14:12:17 crc kubenswrapper[4922]: healthz check failed
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.358215 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.631933 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5eb77bb2-c096-49cc-94de-1d9c00134803","Type":"ContainerDied","Data":"119665fd04cfd3c399856cb69bc997ef0d8eb4783a1744d69d402b0f0186eac8"}
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.631999 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="119665fd04cfd3c399856cb69bc997ef0d8eb4783a1744d69d402b0f0186eac8"
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.632145 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.645149 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee8485a6-67c1-464a-912c-e92667570d79","Type":"ContainerStarted","Data":"6f2460b80d89d33e690ab03872323d8693afded8488e5dcc655349cd861e8cf0"}
Jan 26 14:12:17 crc kubenswrapper[4922]: I0126 14:12:17.680053 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.680027367 podStartE2EDuration="2.680027367s" podCreationTimestamp="2026-01-26 14:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:17.677707664 +0000 UTC m=+154.879970436" watchObservedRunningTime="2026-01-26 14:12:17.680027367 +0000 UTC m=+154.882290139"
Jan 26 14:12:18 crc kubenswrapper[4922]: I0126 14:12:18.357831 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 14:12:18 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld
Jan 26 14:12:18 crc kubenswrapper[4922]: [+]process-running ok
Jan 26 14:12:18 crc kubenswrapper[4922]: healthz check failed
Jan 26 14:12:18 crc kubenswrapper[4922]: I0126 14:12:18.357907 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:12:18 crc kubenswrapper[4922]: I0126 14:12:18.661388 4922 generic.go:334] "Generic (PLEG): container finished" podID="ee8485a6-67c1-464a-912c-e92667570d79" containerID="6f2460b80d89d33e690ab03872323d8693afded8488e5dcc655349cd861e8cf0" exitCode=0
Jan 26 14:12:18 crc kubenswrapper[4922]: I0126 14:12:18.661448 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee8485a6-67c1-464a-912c-e92667570d79","Type":"ContainerDied","Data":"6f2460b80d89d33e690ab03872323d8693afded8488e5dcc655349cd861e8cf0"}
Jan 26 14:12:19 crc kubenswrapper[4922]: I0126 14:12:19.466043 4922 patch_prober.go:28] interesting pod/router-default-5444994796-p2r7g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 14:12:19 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld
Jan 26 14:12:19 crc kubenswrapper[4922]: [+]process-running ok
Jan 26 14:12:19 crc kubenswrapper[4922]: healthz check failed
Jan 26 14:12:19 crc kubenswrapper[4922]: I0126 14:12:19.466128 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p2r7g" podUID="68e6d11a-9d45-42a9-a366-ee3485704024" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:12:20 crc kubenswrapper[4922]: I0126 14:12:20.357843 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-p2r7g"
Jan 26 14:12:20 crc kubenswrapper[4922]: I0126 14:12:20.359946 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-p2r7g"
Jan 26 14:12:20 crc kubenswrapper[4922]: I0126 14:12:20.931240 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-rxv7b"
Jan 26 14:12:21 crc kubenswrapper[4922]: I0126 14:12:21.467633 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fd75n"
Jan 26 14:12:21 crc kubenswrapper[4922]: I0126 14:12:21.471920 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fd75n"
Jan 26 14:12:25 crc kubenswrapper[4922]: I0126 14:12:25.270651 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt"
Jan 26 14:12:25 crc kubenswrapper[4922]: I0126 14:12:25.290754 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/756187f6-68ea-4408-8d07-f691e16b4484-metrics-certs\") pod \"network-metrics-daemon-pzxnt\" (UID: \"756187f6-68ea-4408-8d07-f691e16b4484\") " pod="openshift-multus/network-metrics-daemon-pzxnt"
Jan 26 14:12:25 crc kubenswrapper[4922]: I0126 14:12:25.413724 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pzxnt"
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.463980 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.594791 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee8485a6-67c1-464a-912c-e92667570d79-kube-api-access\") pod \"ee8485a6-67c1-464a-912c-e92667570d79\" (UID: \"ee8485a6-67c1-464a-912c-e92667570d79\") "
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.594969 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee8485a6-67c1-464a-912c-e92667570d79-kubelet-dir\") pod \"ee8485a6-67c1-464a-912c-e92667570d79\" (UID: \"ee8485a6-67c1-464a-912c-e92667570d79\") "
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.595240 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee8485a6-67c1-464a-912c-e92667570d79-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ee8485a6-67c1-464a-912c-e92667570d79" (UID: "ee8485a6-67c1-464a-912c-e92667570d79"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.599610 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8485a6-67c1-464a-912c-e92667570d79-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ee8485a6-67c1-464a-912c-e92667570d79" (UID: "ee8485a6-67c1-464a-912c-e92667570d79"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.696637 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee8485a6-67c1-464a-912c-e92667570d79-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.696690 4922 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee8485a6-67c1-464a-912c-e92667570d79-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.750603 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee8485a6-67c1-464a-912c-e92667570d79","Type":"ContainerDied","Data":"b1f888e6a501c0357f8331664939fa3f1bda2ded21831f6849882791f55293fa"}
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.750718 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1f888e6a501c0357f8331664939fa3f1bda2ded21831f6849882791f55293fa"
Jan 26 14:12:26 crc kubenswrapper[4922]: I0126 14:12:26.750816 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 14:12:30 crc kubenswrapper[4922]: I0126 14:12:30.580743 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r"
Jan 26 14:12:41 crc kubenswrapper[4922]: I0126 14:12:41.251994 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vq57m"
Jan 26 14:12:41 crc kubenswrapper[4922]: I0126 14:12:41.307599 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:12:41 crc kubenswrapper[4922]: I0126 14:12:41.307689 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:12:48 crc kubenswrapper[4922]: E0126 14:12:48.976435 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 26 14:12:48 crc kubenswrapper[4922]: E0126 14:12:48.978512 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42w6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tj7hb_openshift-marketplace(cfcca17c-5b8e-42fa-8fa2-56139592b85b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 14:12:48 crc kubenswrapper[4922]: E0126 14:12:48.979820 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tj7hb" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b"
Jan 26 14:12:50 crc kubenswrapper[4922]: E0126 14:12:50.264551 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tj7hb" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b"
Jan 26 14:12:50 crc kubenswrapper[4922]: I0126 14:12:50.337112 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:12:50 crc kubenswrapper[4922]: E0126 14:12:50.392274 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 26 14:12:50 crc kubenswrapper[4922]: E0126 14:12:50.392563 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tghkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cchdg_openshift-marketplace(64870ab4-a50f-4e29-af84-3b2f63f16180): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 14:12:50 crc kubenswrapper[4922]: E0126 14:12:50.393909 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cchdg" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.844626 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 26 14:12:51 crc kubenswrapper[4922]: E0126 14:12:51.845431 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8485a6-67c1-464a-912c-e92667570d79" containerName="pruner"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.845449 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8485a6-67c1-464a-912c-e92667570d79" containerName="pruner"
Jan 26 14:12:51 crc kubenswrapper[4922]: E0126 14:12:51.845461 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eb77bb2-c096-49cc-94de-1d9c00134803" containerName="pruner"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.845468 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eb77bb2-c096-49cc-94de-1d9c00134803" containerName="pruner"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.845612 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8485a6-67c1-464a-912c-e92667570d79" containerName="pruner"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.845630 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eb77bb2-c096-49cc-94de-1d9c00134803" containerName="pruner"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.846175 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.849233 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.850802 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.855204 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 26 14:12:51 crc kubenswrapper[4922]: E0126 14:12:51.925405 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cchdg" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.933103 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:51 crc kubenswrapper[4922]: I0126 14:12:51.933173 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:52 crc kubenswrapper[4922]: E0126 14:12:52.003118 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 26 14:12:52 crc kubenswrapper[4922]: E0126 14:12:52.003713 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4m6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vtbzf_openshift-marketplace(529fbc62-acac-4f76-92b5-2519ab246802): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 14:12:52 crc kubenswrapper[4922]: E0126 14:12:52.004936 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vtbzf" podUID="529fbc62-acac-4f76-92b5-2519ab246802"
Jan 26 14:12:52 crc kubenswrapper[4922]: I0126 14:12:52.034349 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:52 crc kubenswrapper[4922]: I0126 14:12:52.034418 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:52 crc kubenswrapper[4922]: I0126 14:12:52.034826 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:52 crc kubenswrapper[4922]: I0126 14:12:52.081337 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:52 crc kubenswrapper[4922]: I0126 14:12:52.176224 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.377557 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vtbzf" podUID="529fbc62-acac-4f76-92b5-2519ab246802"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.469009 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.469229 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78ftm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lrldq_openshift-marketplace(9483648b-7a48-480a-8097-5e08962e36ce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.471345 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lrldq" podUID="9483648b-7a48-480a-8097-5e08962e36ce"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.501469 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.506570 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87jz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5cvbq_openshift-marketplace(95a340dd-cf35-496b-aae2-9190b1b24d2b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.508250 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5cvbq" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.526489 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.526662 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pbfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-f9kr2_openshift-marketplace(676089f7-e97f-40b6-94ca-77d491dbf2a5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.528018 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-f9kr2" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.540244 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.540395 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdqbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-l7xmz_openshift-marketplace(18f19460-3c63-42ea-b891-10d9b8a36e2e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.541593 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-l7xmz" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e"
Jan 26 14:12:53 crc kubenswrapper[4922]: I0126 14:12:53.834547 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 26 14:12:53 crc kubenswrapper[4922]: I0126 14:12:53.880861 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pzxnt"]
Jan 26 14:12:53 crc kubenswrapper[4922]: W0126 14:12:53.920829 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod756187f6_68ea_4408_8d07_f691e16b4484.slice/crio-3635b157ca4cce9c8fa208f27a11911829e64ce0cfb3bef18ff6bbb719d0dc4f WatchSource:0}: Error finding container 3635b157ca4cce9c8fa208f27a11911829e64ce0cfb3bef18ff6bbb719d0dc4f: Status 404 returned error can't find the container with id 3635b157ca4cce9c8fa208f27a11911829e64ce0cfb3bef18ff6bbb719d0dc4f
Jan 26 14:12:53 crc kubenswrapper[4922]: I0126 14:12:53.927387 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" event={"ID":"756187f6-68ea-4408-8d07-f691e16b4484","Type":"ContainerStarted","Data":"3635b157ca4cce9c8fa208f27a11911829e64ce0cfb3bef18ff6bbb719d0dc4f"}
Jan 26 14:12:53 crc kubenswrapper[4922]: I0126 14:12:53.930736 4922 generic.go:334] "Generic (PLEG): container finished" podID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerID="14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a" exitCode=0
Jan 26 14:12:53 crc kubenswrapper[4922]: I0126 14:12:53.930794 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tbdp" event={"ID":"eda39827-b747-4e2e-9c8c-5f699cdf4a96","Type":"ContainerDied","Data":"14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a"}
Jan 26 14:12:53 crc kubenswrapper[4922]: I0126 14:12:53.934972 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336","Type":"ContainerStarted","Data":"d7c227b1be5b386f2f4bb6d1a67db975ea74aa0b8aa13d0ee2c2835626387922"}
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.942652 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5cvbq" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.942734 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-l7xmz" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.943114 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrldq" podUID="9483648b-7a48-480a-8097-5e08962e36ce"
Jan 26 14:12:53 crc kubenswrapper[4922]: E0126 14:12:53.944496 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-f9kr2" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5"
Jan 26 14:12:54 crc kubenswrapper[4922]: I0126 14:12:54.941865 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" event={"ID":"756187f6-68ea-4408-8d07-f691e16b4484","Type":"ContainerStarted","Data":"65ad72bd2f47df770b1e54d6f440b4bb290bd06ddfde5939a8050c0371980d95"}
Jan 26 14:12:54 crc kubenswrapper[4922]: I0126 14:12:54.942488 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pzxnt" event={"ID":"756187f6-68ea-4408-8d07-f691e16b4484","Type":"ContainerStarted","Data":"3b58b9094a2328faafc94dce3876b73d0b3258c51f1d505f7f9d1c3a14c3608d"}
Jan 26 14:12:54 crc kubenswrapper[4922]: I0126 14:12:54.945228 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tbdp" event={"ID":"eda39827-b747-4e2e-9c8c-5f699cdf4a96","Type":"ContainerStarted","Data":"821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3"}
Jan 26 14:12:54 crc kubenswrapper[4922]: I0126 14:12:54.948816 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336","Type":"ContainerStarted","Data":"7abe2166741fdadec5e5ff4abaf24963ae6183f41734e98ecb50b015433a12dd"}
Jan 26 14:12:54 crc kubenswrapper[4922]: I0126 14:12:54.961514 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pzxnt" podStartSLOduration=171.961501311 podStartE2EDuration="2m51.961501311s" podCreationTimestamp="2026-01-26 14:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:54.959635652 +0000 UTC m=+192.161898434" watchObservedRunningTime="2026-01-26 14:12:54.961501311 +0000 UTC m=+192.163764083"
Jan 26 14:12:54 crc kubenswrapper[4922]: I0126 14:12:54.982956 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2tbdp" podStartSLOduration=3.587424649 podStartE2EDuration="47.982932405s" podCreationTimestamp="2026-01-26 14:12:07 +0000 UTC" firstStartedPulling="2026-01-26 14:12:10.216343832 +0000 UTC m=+147.418606614" lastFinishedPulling="2026-01-26 14:12:54.611851598 +0000 UTC m=+191.814114370" observedRunningTime="2026-01-26 14:12:54.980475528 +0000 UTC m=+192.182738300" watchObservedRunningTime="2026-01-26 14:12:54.982932405 +0000 UTC m=+192.185195187"
Jan 26 14:12:55 crc kubenswrapper[4922]: I0126 14:12:55.004315 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.004289555 podStartE2EDuration="4.004289555s" podCreationTimestamp="2026-01-26 14:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:55.002058385 +0000 UTC m=+192.204321167" watchObservedRunningTime="2026-01-26 14:12:55.004289555 +0000 UTC m=+192.206552327"
Jan 26 14:12:55 crc kubenswrapper[4922]: E0126 14:12:55.087215 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod0c6435c3_0ab3_4a2b_a3cf_f82bcdcc4336.slice/crio-conmon-7abe2166741fdadec5e5ff4abaf24963ae6183f41734e98ecb50b015433a12dd.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 14:12:55 crc kubenswrapper[4922]: I0126 14:12:55.956534 4922 generic.go:334] "Generic (PLEG): container finished" podID="0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336" containerID="7abe2166741fdadec5e5ff4abaf24963ae6183f41734e98ecb50b015433a12dd" exitCode=0
Jan 26 14:12:55 crc kubenswrapper[4922]: I0126 14:12:55.956649 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336","Type":"ContainerDied","Data":"7abe2166741fdadec5e5ff4abaf24963ae6183f41734e98ecb50b015433a12dd"}
Jan 26 14:12:56 crc kubenswrapper[4922]: I0126 14:12:56.838287 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 26 14:12:56 crc kubenswrapper[4922]: I0126 14:12:56.839904 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:56 crc kubenswrapper[4922]: I0126 14:12:56.851758 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 26 14:12:56 crc kubenswrapper[4922]: I0126 14:12:56.915597 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ead0f98-f19c-47f8-b361-5c451349ab0e-kube-api-access\") pod \"installer-9-crc\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:56 crc kubenswrapper[4922]: I0126 14:12:56.915641 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:56 crc kubenswrapper[4922]: I0126 14:12:56.915683 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-var-lock\") pod \"installer-9-crc\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.016775 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ead0f98-f19c-47f8-b361-5c451349ab0e-kube-api-access\") pod \"installer-9-crc\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.017146 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.017222 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-var-lock\") pod \"installer-9-crc\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.017304 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-var-lock\") pod \"installer-9-crc\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.017619 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.039510 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ead0f98-f19c-47f8-b361-5c451349ab0e-kube-api-access\") pod \"installer-9-crc\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.170740 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.253528 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.320189 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kube-api-access\") pod \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\" (UID: \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\") "
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.320625 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kubelet-dir\") pod \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\" (UID: \"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336\") "
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.320765 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336" (UID: "0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.321102 4922 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.332299 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336" (UID: "0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.422499 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.615117 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.976718 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0ead0f98-f19c-47f8-b361-5c451349ab0e","Type":"ContainerStarted","Data":"042ab10607e36ecd93e076aabd7f9308f6259f1b26f060e3889572e6fbef3b39"}
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.977119 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0ead0f98-f19c-47f8-b361-5c451349ab0e","Type":"ContainerStarted","Data":"f11ec49390bea2792d96bc943774d77351e1e92974295ae380a86ec8a2e87a4b"}
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.982233 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336","Type":"ContainerDied","Data":"d7c227b1be5b386f2f4bb6d1a67db975ea74aa0b8aa13d0ee2c2835626387922"}
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.982291 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c227b1be5b386f2f4bb6d1a67db975ea74aa0b8aa13d0ee2c2835626387922"
Jan 26 14:12:57 crc kubenswrapper[4922]: I0126 14:12:57.982367 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 14:12:58 crc kubenswrapper[4922]: I0126 14:12:58.047945 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2tbdp"
Jan 26 14:12:58 crc kubenswrapper[4922]: I0126 14:12:58.054817 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2tbdp"
Jan 26 14:12:58 crc kubenswrapper[4922]: I0126 14:12:58.150081 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2tbdp"
Jan 26 14:12:58 crc kubenswrapper[4922]: I0126 14:12:58.177136 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.177118873 podStartE2EDuration="2.177118873s" podCreationTimestamp="2026-01-26 14:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:12:58.001156535 +0000 UTC m=+195.203419307" watchObservedRunningTime="2026-01-26 14:12:58.177118873 +0000 UTC m=+195.379381645"
Jan 26 14:13:00 crc kubenswrapper[4922]: I0126 14:13:00.069446 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2tbdp"
Jan 26 14:13:05 crc kubenswrapper[4922]: I0126 14:13:05.028891 4922 generic.go:334] "Generic (PLEG): container finished" podID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerID="7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762" exitCode=0
Jan 26 14:13:05 crc kubenswrapper[4922]: I0126 14:13:05.028938 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tj7hb" event={"ID":"cfcca17c-5b8e-42fa-8fa2-56139592b85b","Type":"ContainerDied","Data":"7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762"}
Jan 26 14:13:06 crc kubenswrapper[4922]: I0126 14:13:06.037647 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cchdg" event={"ID":"64870ab4-a50f-4e29-af84-3b2f63f16180","Type":"ContainerStarted","Data":"5a64cdfe126a82d117bc8605d33fc4351c6c0bcffded993971073b4b0c2be0d0"}
Jan 26 14:13:06 crc kubenswrapper[4922]: I0126 14:13:06.040271 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrldq" event={"ID":"9483648b-7a48-480a-8097-5e08962e36ce","Type":"ContainerStarted","Data":"179e0259dc3717abca55152ab3af4cc1b1be424235acab391ce0e0c5e38b90a6"}
Jan 26 14:13:06 crc kubenswrapper[4922]: I0126 14:13:06.043371 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tj7hb" event={"ID":"cfcca17c-5b8e-42fa-8fa2-56139592b85b","Type":"ContainerStarted","Data":"d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030"}
Jan 26 14:13:06 crc kubenswrapper[4922]: I0126 14:13:06.114371 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tj7hb" podStartSLOduration=3.052898825 podStartE2EDuration="56.114347193s" podCreationTimestamp="2026-01-26 14:12:10 +0000 UTC" firstStartedPulling="2026-01-26 14:12:12.478353728 +0000 UTC m=+149.680616500" lastFinishedPulling="2026-01-26 14:13:05.539802096 +0000 UTC m=+202.742064868" observedRunningTime="2026-01-26 14:13:06.1121355 +0000 UTC m=+203.314398272" watchObservedRunningTime="2026-01-26 14:13:06.114347193 +0000 UTC m=+203.316609965"
Jan 26 14:13:07 crc kubenswrapper[4922]: I0126 14:13:07.051144 4922 generic.go:334] "Generic (PLEG): container finished" podID="9483648b-7a48-480a-8097-5e08962e36ce" containerID="179e0259dc3717abca55152ab3af4cc1b1be424235acab391ce0e0c5e38b90a6" exitCode=0
Jan 26 14:13:07 crc kubenswrapper[4922]: I0126 14:13:07.051482 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrldq" event={"ID":"9483648b-7a48-480a-8097-5e08962e36ce","Type":"ContainerDied","Data":"179e0259dc3717abca55152ab3af4cc1b1be424235acab391ce0e0c5e38b90a6"}
Jan 26 14:13:07 crc kubenswrapper[4922]: I0126 14:13:07.053567 4922 generic.go:334] "Generic (PLEG): container finished" podID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerID="52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68" exitCode=0
Jan 26 14:13:07 crc kubenswrapper[4922]: I0126 14:13:07.053632 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cvbq" event={"ID":"95a340dd-cf35-496b-aae2-9190b1b24d2b","Type":"ContainerDied","Data":"52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68"}
Jan 26 14:13:07 crc kubenswrapper[4922]: I0126 14:13:07.059450 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9kr2" event={"ID":"676089f7-e97f-40b6-94ca-77d491dbf2a5","Type":"ContainerStarted","Data":"ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87"}
Jan 26 14:13:07 crc kubenswrapper[4922]: I0126 14:13:07.062033 4922 generic.go:334] "Generic (PLEG): container finished" podID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerID="5a64cdfe126a82d117bc8605d33fc4351c6c0bcffded993971073b4b0c2be0d0" exitCode=0
Jan 26 14:13:07 crc kubenswrapper[4922]: I0126 14:13:07.062057 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cchdg" event={"ID":"64870ab4-a50f-4e29-af84-3b2f63f16180","Type":"ContainerDied","Data":"5a64cdfe126a82d117bc8605d33fc4351c6c0bcffded993971073b4b0c2be0d0"}
Jan 26 14:13:08 crc kubenswrapper[4922]: I0126 14:13:08.072435 4922 generic.go:334] "Generic (PLEG): container finished" podID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerID="ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87" exitCode=0
Jan 26 14:13:08 crc kubenswrapper[4922]: I0126 14:13:08.072938 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9kr2" event={"ID":"676089f7-e97f-40b6-94ca-77d491dbf2a5","Type":"ContainerDied","Data":"ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87"}
Jan 26 14:13:08 crc kubenswrapper[4922]: I0126 14:13:08.079974 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cchdg" event={"ID":"64870ab4-a50f-4e29-af84-3b2f63f16180","Type":"ContainerStarted","Data":"8d3a7e0379eeaa4da5aab9f52569b2b765ab2d7d845780109c90f87e64e0e7f7"}
Jan 26 14:13:08 crc kubenswrapper[4922]: I0126 14:13:08.090537 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrldq" event={"ID":"9483648b-7a48-480a-8097-5e08962e36ce","Type":"ContainerStarted","Data":"cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581"}
Jan 26 14:13:08 crc kubenswrapper[4922]: I0126 14:13:08.096808 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cvbq"
event={"ID":"95a340dd-cf35-496b-aae2-9190b1b24d2b","Type":"ContainerStarted","Data":"92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c"} Jan 26 14:13:08 crc kubenswrapper[4922]: I0126 14:13:08.161380 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5cvbq" podStartSLOduration=2.9900666080000002 podStartE2EDuration="59.161359957s" podCreationTimestamp="2026-01-26 14:12:09 +0000 UTC" firstStartedPulling="2026-01-26 14:12:11.284963501 +0000 UTC m=+148.487226273" lastFinishedPulling="2026-01-26 14:13:07.45625685 +0000 UTC m=+204.658519622" observedRunningTime="2026-01-26 14:13:08.153424138 +0000 UTC m=+205.355686910" watchObservedRunningTime="2026-01-26 14:13:08.161359957 +0000 UTC m=+205.363622759" Jan 26 14:13:08 crc kubenswrapper[4922]: I0126 14:13:08.184489 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cchdg" podStartSLOduration=2.990398597 podStartE2EDuration="58.184466267s" podCreationTimestamp="2026-01-26 14:12:10 +0000 UTC" firstStartedPulling="2026-01-26 14:12:12.3981885 +0000 UTC m=+149.600451272" lastFinishedPulling="2026-01-26 14:13:07.59225616 +0000 UTC m=+204.794518942" observedRunningTime="2026-01-26 14:13:08.182931941 +0000 UTC m=+205.385194723" watchObservedRunningTime="2026-01-26 14:13:08.184466267 +0000 UTC m=+205.386729039" Jan 26 14:13:08 crc kubenswrapper[4922]: I0126 14:13:08.218765 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lrldq" podStartSLOduration=4.234300677 podStartE2EDuration="59.218749544s" podCreationTimestamp="2026-01-26 14:12:09 +0000 UTC" firstStartedPulling="2026-01-26 14:12:12.470644076 +0000 UTC m=+149.672906848" lastFinishedPulling="2026-01-26 14:13:07.455092943 +0000 UTC m=+204.657355715" observedRunningTime="2026-01-26 14:13:08.215422835 +0000 UTC m=+205.417685607" watchObservedRunningTime="2026-01-26 14:13:08.218749544 +0000 UTC m=+205.421012316" Jan 26 14:13:09 crc kubenswrapper[4922]: I0126 14:13:09.109916 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9kr2" event={"ID":"676089f7-e97f-40b6-94ca-77d491dbf2a5","Type":"ContainerStarted","Data":"5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6"} Jan 26 14:13:09 crc kubenswrapper[4922]: I0126 14:13:09.133509 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f9kr2" podStartSLOduration=3.623083256 podStartE2EDuration="1m2.133478755s" podCreationTimestamp="2026-01-26 14:12:07 +0000 UTC" firstStartedPulling="2026-01-26 14:12:10.206254926 +0000 UTC m=+147.408517708" lastFinishedPulling="2026-01-26 14:13:08.716650435 +0000 UTC m=+205.918913207" observedRunningTime="2026-01-26 14:13:09.131398845 +0000 UTC m=+206.333661617" watchObservedRunningTime="2026-01-26 14:13:09.133478755 +0000 UTC m=+206.335741527" Jan 26 14:13:09 crc kubenswrapper[4922]: I0126 14:13:09.984514 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5cvbq" Jan 26 14:13:09 crc kubenswrapper[4922]: I0126 14:13:09.984865 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5cvbq" Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.024288 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-5cvbq" Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.121955 4922 generic.go:334] "Generic (PLEG): container finished" podID="529fbc62-acac-4f76-92b5-2519ab246802" containerID="0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7" exitCode=0 Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.122119 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbzf" event={"ID":"529fbc62-acac-4f76-92b5-2519ab246802","Type":"ContainerDied","Data":"0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7"} Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.128411 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7xmz" event={"ID":"18f19460-3c63-42ea-b891-10d9b8a36e2e","Type":"ContainerStarted","Data":"73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750"} Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.349922 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.350313 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.393185 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.730257 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.730644 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.945251 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:13:10 crc kubenswrapper[4922]: I0126 14:13:10.946396 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.134643 4922 generic.go:334] "Generic (PLEG): container finished" podID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerID="73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750" exitCode=0 Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.134708 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7xmz" event={"ID":"18f19460-3c63-42ea-b891-10d9b8a36e2e","Type":"ContainerDied","Data":"73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750"} Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.139375 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbzf" event={"ID":"529fbc62-acac-4f76-92b5-2519ab246802","Type":"ContainerStarted","Data":"f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc"} Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.186227 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vtbzf" podStartSLOduration=3.857047644 podStartE2EDuration="1m4.186208375s" podCreationTimestamp="2026-01-26 14:12:07 +0000 UTC" firstStartedPulling="2026-01-26 14:12:10.212238963 +0000 UTC 
m=+147.414501735" lastFinishedPulling="2026-01-26 14:13:10.541399694 +0000 UTC m=+207.743662466" observedRunningTime="2026-01-26 14:13:11.183718316 +0000 UTC m=+208.385981088" watchObservedRunningTime="2026-01-26 14:13:11.186208375 +0000 UTC m=+208.388471147" Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.306515 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.306928 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.307126 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.307855 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.308019 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117" gracePeriod=600 Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.781919 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tj7hb" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerName="registry-server" probeResult="failure" output=< Jan 26 14:13:11 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Jan 26 14:13:11 crc kubenswrapper[4922]: > Jan 26 14:13:11 crc kubenswrapper[4922]: I0126 14:13:11.987096 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cchdg" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerName="registry-server" probeResult="failure" output=< Jan 26 14:13:11 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Jan 26 14:13:11 crc kubenswrapper[4922]: > Jan 26 14:13:12 crc kubenswrapper[4922]: I0126 14:13:12.149439 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117" exitCode=0 Jan 26 14:13:12 crc kubenswrapper[4922]: I0126 14:13:12.149515 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117"} Jan 26 14:13:12 crc kubenswrapper[4922]: I0126 14:13:12.206744 4922 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:13:13 crc kubenswrapper[4922]: I0126 14:13:13.158418 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"e6c7dbd41ad163fa2c442937e7ba458ff681ccb83b22aad8b12d1a7403d8aa48"} Jan 26 14:13:15 crc kubenswrapper[4922]: I0126 14:13:15.172343 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7xmz" event={"ID":"18f19460-3c63-42ea-b891-10d9b8a36e2e","Type":"ContainerStarted","Data":"f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322"} Jan 26 14:13:15 crc kubenswrapper[4922]: I0126 14:13:15.198427 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l7xmz" podStartSLOduration=3.860749036 podStartE2EDuration="1m8.198402504s" podCreationTimestamp="2026-01-26 14:12:07 +0000 UTC" firstStartedPulling="2026-01-26 14:12:10.235194204 +0000 UTC m=+147.437457006" lastFinishedPulling="2026-01-26 14:13:14.572847702 +0000 UTC m=+211.775110474" observedRunningTime="2026-01-26 14:13:15.193542888 +0000 UTC m=+212.395805660" watchObservedRunningTime="2026-01-26 14:13:15.198402504 +0000 UTC m=+212.400665276" Jan 26 14:13:16 crc kubenswrapper[4922]: I0126 14:13:16.479937 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrldq"] Jan 26 14:13:16 crc kubenswrapper[4922]: I0126 14:13:16.480233 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lrldq" podUID="9483648b-7a48-480a-8097-5e08962e36ce" containerName="registry-server" containerID="cri-o://cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581" gracePeriod=2 Jan 26 14:13:17 crc kubenswrapper[4922]: I0126 14:13:17.546309 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:13:17 crc kubenswrapper[4922]: I0126 14:13:17.546740 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:13:17 crc kubenswrapper[4922]: I0126 14:13:17.624652 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:13:18 crc kubenswrapper[4922]: I0126 14:13:18.276890 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f9kr2" Jan 26 14:13:18 crc kubenswrapper[4922]: I0126 14:13:18.276973 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:13:18 crc kubenswrapper[4922]: I0126 14:13:18.277243 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:13:18 crc kubenswrapper[4922]: I0126 14:13:18.277295 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f9kr2" Jan 26 14:13:18 crc kubenswrapper[4922]: I0126 14:13:18.363528 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:13:18 crc kubenswrapper[4922]: I0126 14:13:18.364828 4922 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f9kr2" Jan 26 14:13:19 crc kubenswrapper[4922]: I0126 14:13:19.271174 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:13:19 crc kubenswrapper[4922]: I0126 14:13:19.315058 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f9kr2" Jan 26 14:13:20 crc kubenswrapper[4922]: I0126 14:13:20.043352 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5cvbq" Jan 26 14:13:20 crc kubenswrapper[4922]: I0126 14:13:20.209555 4922 generic.go:334] "Generic (PLEG): container finished" podID="9483648b-7a48-480a-8097-5e08962e36ce" containerID="cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581" exitCode=0 Jan 26 14:13:20 crc kubenswrapper[4922]: I0126 14:13:20.209620 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrldq" event={"ID":"9483648b-7a48-480a-8097-5e08962e36ce","Type":"ContainerDied","Data":"cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581"} Jan 26 14:13:20 crc kubenswrapper[4922]: I0126 14:13:20.284570 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtbzf"] Jan 26 14:13:20 crc kubenswrapper[4922]: E0126 14:13:20.351201 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581 is running failed: container process not found" containerID="cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 14:13:20 crc kubenswrapper[4922]: E0126 14:13:20.351732 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581 is running failed: container process not found" containerID="cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 14:13:20 crc kubenswrapper[4922]: E0126 14:13:20.352009 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581 is running failed: container process not found" containerID="cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 14:13:20 crc kubenswrapper[4922]: E0126 14:13:20.352050 4922 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-lrldq" podUID="9483648b-7a48-480a-8097-5e08962e36ce" containerName="registry-server" Jan 26 14:13:20 crc kubenswrapper[4922]: I0126 14:13:20.798673 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:13:20 crc kubenswrapper[4922]: I0126 14:13:20.870843 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-tj7hb" Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.000914 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.070904 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.283350 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9kr2"] Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.511576 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.688107 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-utilities\") pod \"9483648b-7a48-480a-8097-5e08962e36ce\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.688176 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-catalog-content\") pod \"9483648b-7a48-480a-8097-5e08962e36ce\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.688244 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78ftm\" (UniqueName: \"kubernetes.io/projected/9483648b-7a48-480a-8097-5e08962e36ce-kube-api-access-78ftm\") pod \"9483648b-7a48-480a-8097-5e08962e36ce\" (UID: \"9483648b-7a48-480a-8097-5e08962e36ce\") " Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.689093 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-utilities" (OuterVolumeSpecName: "utilities") pod "9483648b-7a48-480a-8097-5e08962e36ce" (UID: "9483648b-7a48-480a-8097-5e08962e36ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.712169 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9483648b-7a48-480a-8097-5e08962e36ce" (UID: "9483648b-7a48-480a-8097-5e08962e36ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.712310 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9483648b-7a48-480a-8097-5e08962e36ce-kube-api-access-78ftm" (OuterVolumeSpecName: "kube-api-access-78ftm") pod "9483648b-7a48-480a-8097-5e08962e36ce" (UID: "9483648b-7a48-480a-8097-5e08962e36ce"). InnerVolumeSpecName "kube-api-access-78ftm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.790238 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.790279 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9483648b-7a48-480a-8097-5e08962e36ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:21 crc kubenswrapper[4922]: I0126 14:13:21.790293 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78ftm\" (UniqueName: \"kubernetes.io/projected/9483648b-7a48-480a-8097-5e08962e36ce-kube-api-access-78ftm\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:22 crc kubenswrapper[4922]: I0126 14:13:22.223732 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrldq" event={"ID":"9483648b-7a48-480a-8097-5e08962e36ce","Type":"ContainerDied","Data":"8ea08fc19c2a6db69ec69c61268ba2aa72dbe4ad810efe3f2b33fe89da5149b9"} Jan 26 14:13:22 crc kubenswrapper[4922]: I0126 14:13:22.223803 4922 scope.go:117] "RemoveContainer" containerID="cf22a2a437749d46b691152c45e8ced829af83638e8e46d49d0937e601bb9581" Jan 26 14:13:22 crc kubenswrapper[4922]: I0126 14:13:22.223858 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrldq" Jan 26 14:13:22 crc kubenswrapper[4922]: I0126 14:13:22.223883 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f9kr2" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerName="registry-server" containerID="cri-o://5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6" gracePeriod=2 Jan 26 14:13:22 crc kubenswrapper[4922]: I0126 14:13:22.224272 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vtbzf" podUID="529fbc62-acac-4f76-92b5-2519ab246802" containerName="registry-server" containerID="cri-o://f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc" gracePeriod=2 Jan 26 14:13:22 crc kubenswrapper[4922]: I0126 14:13:22.247165 4922 scope.go:117] "RemoveContainer" containerID="179e0259dc3717abca55152ab3af4cc1b1be424235acab391ce0e0c5e38b90a6" Jan 26 14:13:22 crc kubenswrapper[4922]: I0126 14:13:22.262742 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrldq"] Jan 26 14:13:22 crc kubenswrapper[4922]: I0126 14:13:22.265356 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrldq"] Jan 26 14:13:22 crc kubenswrapper[4922]: I0126 14:13:22.276478 4922 scope.go:117] "RemoveContainer" containerID="976a6e05d0e4886566ff997f35d8f306ed356b5853e642f6590d262693ffdb68" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.105460 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9483648b-7a48-480a-8097-5e08962e36ce" path="/var/lib/kubelet/pods/9483648b-7a48-480a-8097-5e08962e36ce/volumes" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.792523 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.796315 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9kr2" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.934607 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4m6j\" (UniqueName: \"kubernetes.io/projected/529fbc62-acac-4f76-92b5-2519ab246802-kube-api-access-x4m6j\") pod \"529fbc62-acac-4f76-92b5-2519ab246802\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.935327 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pbfl\" (UniqueName: \"kubernetes.io/projected/676089f7-e97f-40b6-94ca-77d491dbf2a5-kube-api-access-5pbfl\") pod \"676089f7-e97f-40b6-94ca-77d491dbf2a5\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.935372 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-utilities\") pod \"676089f7-e97f-40b6-94ca-77d491dbf2a5\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.936422 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-utilities" (OuterVolumeSpecName: "utilities") pod "529fbc62-acac-4f76-92b5-2519ab246802" (UID: "529fbc62-acac-4f76-92b5-2519ab246802"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.936651 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-utilities" (OuterVolumeSpecName: "utilities") pod "676089f7-e97f-40b6-94ca-77d491dbf2a5" (UID: "676089f7-e97f-40b6-94ca-77d491dbf2a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.936723 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-utilities\") pod \"529fbc62-acac-4f76-92b5-2519ab246802\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.936776 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-catalog-content\") pod \"676089f7-e97f-40b6-94ca-77d491dbf2a5\" (UID: \"676089f7-e97f-40b6-94ca-77d491dbf2a5\") " Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.936804 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-catalog-content\") pod \"529fbc62-acac-4f76-92b5-2519ab246802\" (UID: \"529fbc62-acac-4f76-92b5-2519ab246802\") " Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.940944 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529fbc62-acac-4f76-92b5-2519ab246802-kube-api-access-x4m6j" (OuterVolumeSpecName: "kube-api-access-x4m6j") pod "529fbc62-acac-4f76-92b5-2519ab246802" (UID: "529fbc62-acac-4f76-92b5-2519ab246802"). InnerVolumeSpecName "kube-api-access-x4m6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.941029 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676089f7-e97f-40b6-94ca-77d491dbf2a5-kube-api-access-5pbfl" (OuterVolumeSpecName: "kube-api-access-5pbfl") pod "676089f7-e97f-40b6-94ca-77d491dbf2a5" (UID: "676089f7-e97f-40b6-94ca-77d491dbf2a5"). InnerVolumeSpecName "kube-api-access-5pbfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.942703 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pbfl\" (UniqueName: \"kubernetes.io/projected/676089f7-e97f-40b6-94ca-77d491dbf2a5-kube-api-access-5pbfl\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.942752 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.942769 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:23 crc kubenswrapper[4922]: I0126 14:13:23.942782 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4m6j\" (UniqueName: \"kubernetes.io/projected/529fbc62-acac-4f76-92b5-2519ab246802-kube-api-access-x4m6j\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.004233 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "529fbc62-acac-4f76-92b5-2519ab246802" (UID: "529fbc62-acac-4f76-92b5-2519ab246802"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.005184 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "676089f7-e97f-40b6-94ca-77d491dbf2a5" (UID: "676089f7-e97f-40b6-94ca-77d491dbf2a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.044326 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676089f7-e97f-40b6-94ca-77d491dbf2a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.044357 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/529fbc62-acac-4f76-92b5-2519ab246802-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.245176 4922 generic.go:334] "Generic (PLEG): container finished" podID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerID="5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6" exitCode=0 Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.245284 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9kr2" event={"ID":"676089f7-e97f-40b6-94ca-77d491dbf2a5","Type":"ContainerDied","Data":"5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6"} Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.245287 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9kr2" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.245345 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9kr2" event={"ID":"676089f7-e97f-40b6-94ca-77d491dbf2a5","Type":"ContainerDied","Data":"efe549a13da8fc63c0dd8d70ee68a56f03b686c15a0a2a65bc77285eedb4ce33"} Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.245379 4922 scope.go:117] "RemoveContainer" containerID="5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.248790 4922 generic.go:334] "Generic (PLEG): container finished" podID="529fbc62-acac-4f76-92b5-2519ab246802" containerID="f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc" exitCode=0 Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.248817 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbzf" event={"ID":"529fbc62-acac-4f76-92b5-2519ab246802","Type":"ContainerDied","Data":"f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc"} Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.248833 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbzf" event={"ID":"529fbc62-acac-4f76-92b5-2519ab246802","Type":"ContainerDied","Data":"5b500d51802be5f6af027cda0a076613704e117ea427973ce78217eec870812c"} Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.248940 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtbzf" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.267870 4922 scope.go:117] "RemoveContainer" containerID="ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.297185 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9kr2"] Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.301296 4922 scope.go:117] "RemoveContainer" containerID="505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.301587 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f9kr2"] Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.310206 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtbzf"] Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.314392 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vtbzf"] Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.321898 4922 scope.go:117] "RemoveContainer" containerID="5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6" Jan 26 14:13:24 crc kubenswrapper[4922]: E0126 14:13:24.322422 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6\": container with ID starting with 5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6 not found: ID does not exist" containerID="5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.322456 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6"} err="failed to get container status \"5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6\": rpc error: code = NotFound desc = could not find container \"5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6\": container with ID starting with 5d3a2f95fea3930fa52b61ff779d3ca274ae52c7d6a7d2b5f1ab2a9d3ddbbaa6 not found: ID does not exist" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.322482 4922 scope.go:117] "RemoveContainer" containerID="ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87" Jan 26 14:13:24 crc kubenswrapper[4922]: E0126 14:13:24.322794 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87\": container with ID starting with ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87 not found: ID does not exist" containerID="ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.322821 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87"} err="failed to get container status \"ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87\": rpc error: code = NotFound desc = could not find container \"ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87\": container with ID starting with 
ddef180fe53377b3fda8e0bc8519e18c99a60da7918b8bfb0a3b99f3ec0aee87 not found: ID does not exist" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.322836 4922 scope.go:117] "RemoveContainer" containerID="505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331" Jan 26 14:13:24 crc kubenswrapper[4922]: E0126 14:13:24.323555 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331\": container with ID starting with 505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331 not found: ID does not exist" containerID="505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.323578 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331"} err="failed to get container status \"505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331\": rpc error: code = NotFound desc = could not find container \"505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331\": container with ID starting with 505f5209d998b868e1eaabc34a193621f7cf457098c933e4175e9db305038331 not found: ID does not exist" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.323591 4922 scope.go:117] "RemoveContainer" containerID="f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.341340 4922 scope.go:117] "RemoveContainer" containerID="0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.360777 4922 scope.go:117] "RemoveContainer" containerID="56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.384331 4922 scope.go:117] "RemoveContainer" containerID="f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc" Jan 26 14:13:24 crc kubenswrapper[4922]: E0126 14:13:24.384762 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc\": container with ID starting with f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc not found: ID does not exist" containerID="f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.384824 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc"} err="failed to get container status \"f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc\": rpc error: code = NotFound desc = could not find container \"f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc\": container with ID starting with f3133bcf95f2240b1007a1adc44fadcc32c9e37fbc8387c502ab5b8a9d5e23bc not found: ID does not exist" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.384865 4922 scope.go:117] "RemoveContainer" containerID="0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7" Jan 26 14:13:24 crc kubenswrapper[4922]: E0126 14:13:24.385794 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7\": container 
with ID starting with 0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7 not found: ID does not exist" containerID="0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.385851 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7"} err="failed to get container status \"0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7\": rpc error: code = NotFound desc = could not find container \"0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7\": container with ID starting with 0d1eae2c53723ebc7eff746c5ea24445a1899e6f8eee397f0bcf6383864174e7 not found: ID does not exist" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.385892 4922 scope.go:117] "RemoveContainer" containerID="56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a" Jan 26 14:13:24 crc kubenswrapper[4922]: E0126 14:13:24.386175 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a\": container with ID starting with 56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a not found: ID does not exist" containerID="56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.386200 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a"} err="failed to get container status \"56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a\": rpc error: code = NotFound desc = could not find container \"56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a\": container with ID starting with 56fdf4b9078aeeceb47484deb35c98dc531aaadd0a12041911af12846e75e65a not found: ID does not exist" Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.682041 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cchdg"] Jan 26 14:13:24 crc kubenswrapper[4922]: I0126 14:13:24.682349 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cchdg" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerName="registry-server" containerID="cri-o://8d3a7e0379eeaa4da5aab9f52569b2b765ab2d7d845780109c90f87e64e0e7f7" gracePeriod=2 Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.100768 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529fbc62-acac-4f76-92b5-2519ab246802" path="/var/lib/kubelet/pods/529fbc62-acac-4f76-92b5-2519ab246802/volumes" Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.102044 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5" path="/var/lib/kubelet/pods/676089f7-e97f-40b6-94ca-77d491dbf2a5/volumes" Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.261312 4922 generic.go:334] "Generic (PLEG): container finished" podID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerID="8d3a7e0379eeaa4da5aab9f52569b2b765ab2d7d845780109c90f87e64e0e7f7" exitCode=0 Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.261377 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cchdg" 
event={"ID":"64870ab4-a50f-4e29-af84-3b2f63f16180","Type":"ContainerDied","Data":"8d3a7e0379eeaa4da5aab9f52569b2b765ab2d7d845780109c90f87e64e0e7f7"} Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.616308 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.766738 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-catalog-content\") pod \"64870ab4-a50f-4e29-af84-3b2f63f16180\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.766846 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tghkw\" (UniqueName: \"kubernetes.io/projected/64870ab4-a50f-4e29-af84-3b2f63f16180-kube-api-access-tghkw\") pod \"64870ab4-a50f-4e29-af84-3b2f63f16180\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.766965 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-utilities\") pod \"64870ab4-a50f-4e29-af84-3b2f63f16180\" (UID: \"64870ab4-a50f-4e29-af84-3b2f63f16180\") " Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.767816 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-utilities" (OuterVolumeSpecName: "utilities") pod "64870ab4-a50f-4e29-af84-3b2f63f16180" (UID: "64870ab4-a50f-4e29-af84-3b2f63f16180"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.772511 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64870ab4-a50f-4e29-af84-3b2f63f16180-kube-api-access-tghkw" (OuterVolumeSpecName: "kube-api-access-tghkw") pod "64870ab4-a50f-4e29-af84-3b2f63f16180" (UID: "64870ab4-a50f-4e29-af84-3b2f63f16180"). InnerVolumeSpecName "kube-api-access-tghkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.868271 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tghkw\" (UniqueName: \"kubernetes.io/projected/64870ab4-a50f-4e29-af84-3b2f63f16180-kube-api-access-tghkw\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.868326 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.889490 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64870ab4-a50f-4e29-af84-3b2f63f16180" (UID: "64870ab4-a50f-4e29-af84-3b2f63f16180"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:13:25 crc kubenswrapper[4922]: I0126 14:13:25.968995 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64870ab4-a50f-4e29-af84-3b2f63f16180-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:26 crc kubenswrapper[4922]: I0126 14:13:26.269838 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cchdg" event={"ID":"64870ab4-a50f-4e29-af84-3b2f63f16180","Type":"ContainerDied","Data":"735dfc5004d2cbce4fbc05b181a4b158af2abf197d8fc543aae0ea78ba5bdaf9"} Jan 26 14:13:26 crc kubenswrapper[4922]: I0126 14:13:26.269907 4922 scope.go:117] "RemoveContainer" containerID="8d3a7e0379eeaa4da5aab9f52569b2b765ab2d7d845780109c90f87e64e0e7f7" Jan 26 14:13:26 crc kubenswrapper[4922]: I0126 14:13:26.270085 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cchdg" Jan 26 14:13:26 crc kubenswrapper[4922]: I0126 14:13:26.295857 4922 scope.go:117] "RemoveContainer" containerID="5a64cdfe126a82d117bc8605d33fc4351c6c0bcffded993971073b4b0c2be0d0" Jan 26 14:13:26 crc kubenswrapper[4922]: I0126 14:13:26.307321 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cchdg"] Jan 26 14:13:26 crc kubenswrapper[4922]: I0126 14:13:26.317037 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cchdg"] Jan 26 14:13:26 crc kubenswrapper[4922]: I0126 14:13:26.324398 4922 scope.go:117] "RemoveContainer" containerID="9f915094cf4d63253edbfe63c49e8f453a33957927002f16575046bc8199e9b0" Jan 26 14:13:27 crc kubenswrapper[4922]: I0126 14:13:27.098963 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" path="/var/lib/kubelet/pods/64870ab4-a50f-4e29-af84-3b2f63f16180/volumes" Jan 26 14:13:27 crc kubenswrapper[4922]: I0126 14:13:27.593956 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l7xmz" Jan 26 14:13:30 crc kubenswrapper[4922]: I0126 14:13:30.931297 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5lfvx"] Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.787019 4922 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.787855 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerName="extract-utilities" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.787870 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerName="extract-utilities" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.787882 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.787891 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.787926 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerName="extract-utilities" Jan 26 14:13:35 crc 
kubenswrapper[4922]: I0126 14:13:35.787934 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerName="extract-utilities" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.787948 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336" containerName="pruner" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.787956 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336" containerName="pruner" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.787963 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.787970 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.787981 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerName="extract-content" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788006 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerName="extract-content" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.788022 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9483648b-7a48-480a-8097-5e08962e36ce" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788030 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9483648b-7a48-480a-8097-5e08962e36ce" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.788041 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9483648b-7a48-480a-8097-5e08962e36ce" containerName="extract-content" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788049 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9483648b-7a48-480a-8097-5e08962e36ce" containerName="extract-content" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.788057 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529fbc62-acac-4f76-92b5-2519ab246802" containerName="extract-utilities" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788089 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="529fbc62-acac-4f76-92b5-2519ab246802" containerName="extract-utilities" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.788101 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerName="extract-content" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788108 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerName="extract-content" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.788121 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9483648b-7a48-480a-8097-5e08962e36ce" containerName="extract-utilities" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788129 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9483648b-7a48-480a-8097-5e08962e36ce" containerName="extract-utilities" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.788166 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529fbc62-acac-4f76-92b5-2519ab246802" containerName="extract-content" Jan 26 
14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788175 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="529fbc62-acac-4f76-92b5-2519ab246802" containerName="extract-content" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.788188 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529fbc62-acac-4f76-92b5-2519ab246802" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788199 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="529fbc62-acac-4f76-92b5-2519ab246802" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788351 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9483648b-7a48-480a-8097-5e08962e36ce" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788365 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="64870ab4-a50f-4e29-af84-3b2f63f16180" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788377 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="676089f7-e97f-40b6-94ca-77d491dbf2a5" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788409 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6435c3-0ab3-4a2b-a3cf-f82bcdcc4336" containerName="pruner" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788422 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="529fbc62-acac-4f76-92b5-2519ab246802" containerName="registry-server" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.788891 4922 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.789117 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.789273 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5" gracePeriod=15 Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.789329 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d" gracePeriod=15 Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.789357 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306" gracePeriod=15 Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.789462 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe" gracePeriod=15 Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.789468 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd" gracePeriod=15 Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.792330 4922 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.792733 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.792761 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.792795 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.792807 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.792823 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.792835 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.792854 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.792866 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.792888 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.792919 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.792953 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.792965 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.793216 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.793256 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.793281 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.793306 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.793326 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.793343 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.793508 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.793523 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.814326 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.814398 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.814435 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.814477 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.814510 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.814905 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.814968 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.815058 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: E0126 14:13:35.854658 4922 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.916799 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.916849 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.916890 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.916922 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.916963 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.916985 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917017 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917036 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917086 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917147 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917108 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917168 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917237 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917291 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917329 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:35 crc kubenswrapper[4922]: I0126 14:13:35.917405 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.155950 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:36 crc kubenswrapper[4922]: E0126 14:13:36.188591 4922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e4d68d1609a15 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 14:13:36.187890197 +0000 UTC m=+233.390153009,LastTimestamp:2026-01-26 14:13:36.187890197 +0000 UTC m=+233.390153009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.330596 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fddb7ef8a87decfcf3181db6ab5838ff78ac4c92f857db6d301c88dadd3b4c90"} Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.335267 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.337482 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.338551 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd" exitCode=0 Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.338582 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d" exitCode=0 Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.338596 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306" exitCode=0 Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.338608 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe" exitCode=2 Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.338685 4922 scope.go:117] "RemoveContainer" containerID="3e00d53aea049d30fa4d9dbbea7198f301f87f1bba77ba301b2606da3253661f" Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.340714 4922 generic.go:334] "Generic (PLEG): container finished" podID="0ead0f98-f19c-47f8-b361-5c451349ab0e" containerID="042ab10607e36ecd93e076aabd7f9308f6259f1b26f060e3889572e6fbef3b39" 
exitCode=0 Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.340769 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0ead0f98-f19c-47f8-b361-5c451349ab0e","Type":"ContainerDied","Data":"042ab10607e36ecd93e076aabd7f9308f6259f1b26f060e3889572e6fbef3b39"} Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.341905 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:36 crc kubenswrapper[4922]: I0126 14:13:36.343304 4922 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.352484 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.360243 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc"} Jan 26 14:13:37 crc kubenswrapper[4922]: E0126 14:13:37.362678 4922 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.362914 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:37 crc kubenswrapper[4922]: E0126 14:13:37.658176 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:37 crc kubenswrapper[4922]: E0126 14:13:37.659384 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:37 crc kubenswrapper[4922]: E0126 14:13:37.659635 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:37 crc kubenswrapper[4922]: E0126 14:13:37.659811 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: 
connection refused" Jan 26 14:13:37 crc kubenswrapper[4922]: E0126 14:13:37.659976 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.660006 4922 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 14:13:37 crc kubenswrapper[4922]: E0126 14:13:37.660181 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="200ms" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.677144 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.677941 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.857843 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ead0f98-f19c-47f8-b361-5c451349ab0e-kube-api-access\") pod \"0ead0f98-f19c-47f8-b361-5c451349ab0e\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.857928 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-var-lock\") pod \"0ead0f98-f19c-47f8-b361-5c451349ab0e\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.857995 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-kubelet-dir\") pod \"0ead0f98-f19c-47f8-b361-5c451349ab0e\" (UID: \"0ead0f98-f19c-47f8-b361-5c451349ab0e\") " Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.859710 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0ead0f98-f19c-47f8-b361-5c451349ab0e" (UID: "0ead0f98-f19c-47f8-b361-5c451349ab0e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:13:37 crc kubenswrapper[4922]: E0126 14:13:37.873261 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="400ms" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.884467 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-var-lock" (OuterVolumeSpecName: "var-lock") pod "0ead0f98-f19c-47f8-b361-5c451349ab0e" (UID: "0ead0f98-f19c-47f8-b361-5c451349ab0e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.885878 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ead0f98-f19c-47f8-b361-5c451349ab0e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0ead0f98-f19c-47f8-b361-5c451349ab0e" (UID: "0ead0f98-f19c-47f8-b361-5c451349ab0e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.960563 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ead0f98-f19c-47f8-b361-5c451349ab0e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.960606 4922 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:37 crc kubenswrapper[4922]: I0126 14:13:37.960619 4922 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ead0f98-f19c-47f8-b361-5c451349ab0e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:38 crc kubenswrapper[4922]: E0126 14:13:38.159230 4922 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" volumeName="registry-storage" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.163960 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.164582 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.165099 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.165470 4922 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.264216 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.264258 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.264302 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.264395 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.264465 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.264463 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:13:38 crc kubenswrapper[4922]: E0126 14:13:38.275060 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="800ms" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.365825 4922 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.365899 4922 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.365922 4922 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.370211 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.372127 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5" exitCode=0 Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.372319 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.372405 4922 scope.go:117] "RemoveContainer" containerID="9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.374625 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0ead0f98-f19c-47f8-b361-5c451349ab0e","Type":"ContainerDied","Data":"f11ec49390bea2792d96bc943774d77351e1e92974295ae380a86ec8a2e87a4b"} Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.374690 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f11ec49390bea2792d96bc943774d77351e1e92974295ae380a86ec8a2e87a4b" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.374647 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:13:38 crc kubenswrapper[4922]: E0126 14:13:38.375926 4922 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.391452 4922 scope.go:117] "RemoveContainer" containerID="c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.406991 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.407922 4922 scope.go:117] "RemoveContainer" containerID="04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.407974 4922 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.408546 4922 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.409039 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.423770 4922 scope.go:117] "RemoveContainer" containerID="1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.439363 4922 scope.go:117] "RemoveContainer" containerID="73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.457291 4922 scope.go:117] "RemoveContainer" containerID="0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.479456 4922 scope.go:117] "RemoveContainer" containerID="9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd" Jan 26 14:13:38 crc kubenswrapper[4922]: E0126 14:13:38.480217 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\": container with ID starting with 9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd not found: ID does not exist" containerID="9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.480289 
4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd"} err="failed to get container status \"9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\": rpc error: code = NotFound desc = could not find container \"9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd\": container with ID starting with 9f4a40835bb5bd2160fe2a73da8fd44475077fe8f4870b30a20569e0ba44debd not found: ID does not exist" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.480338 4922 scope.go:117] "RemoveContainer" containerID="c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d" Jan 26 14:13:38 crc kubenswrapper[4922]: E0126 14:13:38.483906 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\": container with ID starting with c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d not found: ID does not exist" containerID="c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.483955 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d"} err="failed to get container status \"c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\": rpc error: code = NotFound desc = could not find container \"c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d\": container with ID starting with c17aa7e9919b92df9ef3e219e94e329a5a6f7395be258ecc5ae0b87eb7feff3d not found: ID does not exist" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.483989 4922 scope.go:117] "RemoveContainer" containerID="04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306" Jan 26 14:13:38 crc kubenswrapper[4922]: E0126 14:13:38.484551 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\": container with ID starting with 04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306 not found: ID does not exist" containerID="04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.484605 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306"} err="failed to get container status \"04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\": rpc error: code = NotFound desc = could not find container \"04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306\": container with ID starting with 04a8c94fa5b48d8a9d3e74c3a35919d11a2d62ee0067c59b4ab06a5c8f5cf306 not found: ID does not exist" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.484632 4922 scope.go:117] "RemoveContainer" containerID="1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe" Jan 26 14:13:38 crc kubenswrapper[4922]: E0126 14:13:38.485129 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\": container with ID starting with 1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe not found: ID 
does not exist" containerID="1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.485182 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe"} err="failed to get container status \"1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\": rpc error: code = NotFound desc = could not find container \"1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe\": container with ID starting with 1072332b19e2b7488c0cfb079514d35c4f3833ee1a801d53c17e4657375c09fe not found: ID does not exist" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.485220 4922 scope.go:117] "RemoveContainer" containerID="73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5" Jan 26 14:13:38 crc kubenswrapper[4922]: E0126 14:13:38.485684 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\": container with ID starting with 73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5 not found: ID does not exist" containerID="73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.485734 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5"} err="failed to get container status \"73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\": rpc error: code = NotFound desc = could not find container \"73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5\": container with ID starting with 73b3b9f7e7d4a7ca2844b12e256db0004a71a1674fab93a8391574d5e1caffd5 not found: ID does not exist" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.485765 4922 scope.go:117] "RemoveContainer" containerID="0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9" Jan 26 14:13:38 crc kubenswrapper[4922]: E0126 14:13:38.487034 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\": container with ID starting with 0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9 not found: ID does not exist" containerID="0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9" Jan 26 14:13:38 crc kubenswrapper[4922]: I0126 14:13:38.487105 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9"} err="failed to get container status \"0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\": rpc error: code = NotFound desc = could not find container \"0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9\": container with ID starting with 0503e47733a3b7a5952070721afa2f9e559d85b2f029867d779edf61c0f373f9 not found: ID does not exist" Jan 26 14:13:39 crc kubenswrapper[4922]: E0126 14:13:39.076754 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="1.6s" Jan 26 14:13:39 crc kubenswrapper[4922]: I0126 14:13:39.102495 
4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 14:13:40 crc kubenswrapper[4922]: E0126 14:13:40.678490 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="3.2s" Jan 26 14:13:43 crc kubenswrapper[4922]: I0126 14:13:43.098980 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:43 crc kubenswrapper[4922]: E0126 14:13:43.880487 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="6.4s" Jan 26 14:13:44 crc kubenswrapper[4922]: E0126 14:13:44.911716 4922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e4d68d1609a15 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 14:13:36.187890197 +0000 UTC m=+233.390153009,LastTimestamp:2026-01-26 14:13:36.187890197 +0000 UTC m=+233.390153009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 14:13:48 crc kubenswrapper[4922]: I0126 14:13:48.092499 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:48 crc kubenswrapper[4922]: I0126 14:13:48.093965 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:48 crc kubenswrapper[4922]: I0126 14:13:48.124694 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6" Jan 26 14:13:48 crc kubenswrapper[4922]: I0126 14:13:48.124761 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6" Jan 26 14:13:48 crc kubenswrapper[4922]: E0126 14:13:48.126195 4922 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:48 crc kubenswrapper[4922]: I0126 14:13:48.127049 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:48 crc kubenswrapper[4922]: W0126 14:13:48.164177 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-012e6829023c32bbcf107725862bb6a6fa4ee68fdd9771ceec799b21907b7ca5 WatchSource:0}: Error finding container 012e6829023c32bbcf107725862bb6a6fa4ee68fdd9771ceec799b21907b7ca5: Status 404 returned error can't find the container with id 012e6829023c32bbcf107725862bb6a6fa4ee68fdd9771ceec799b21907b7ca5 Jan 26 14:13:48 crc kubenswrapper[4922]: I0126 14:13:48.448523 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"012e6829023c32bbcf107725862bb6a6fa4ee68fdd9771ceec799b21907b7ca5"} Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.462510 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.462595 4922 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747" exitCode=1 Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.462661 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747"} Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.464175 4922 scope.go:117] "RemoveContainer" containerID="afddbb8d84a9103a60710a29a270ae00a262d7eee1912e23eb2a66ff34bbf747" Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.464761 4922 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.465059 4922 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2ddad3de0ef08c933a52f132fd7090684b219fc2241a6f05332d7135b2e8683e" exitCode=0 Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.465166 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2ddad3de0ef08c933a52f132fd7090684b219fc2241a6f05332d7135b2e8683e"} Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.465535 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.465702 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6" Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.465745 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6" Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.466182 4922 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:49 crc kubenswrapper[4922]: E0126 14:13:49.466331 4922 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:49 crc kubenswrapper[4922]: I0126 14:13:49.466700 4922 status_manager.go:851] "Failed to get status for pod" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 26 14:13:50 crc kubenswrapper[4922]: I0126 14:13:50.476894 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 14:13:50 crc kubenswrapper[4922]: I0126 14:13:50.477304 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bc441bcb36f808c6fdeab1276e4eed211fa1347b51b79298103354caa964a0ac"} Jan 26 14:13:50 crc kubenswrapper[4922]: I0126 14:13:50.479510 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"68fa52ce251ddebaa29313453a6c1fbc3252ab360ead0e98a195998c522c6e21"} Jan 26 14:13:50 crc kubenswrapper[4922]: I0126 14:13:50.479577 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4465fdfab8cd2159c0c157bd5f49c641f29f5d13d04758833f454b322cc59193"} Jan 26 14:13:50 crc kubenswrapper[4922]: I0126 14:13:50.479599 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ebc042f140e680ce88296a517a7c9c676ec683fee974c136e14d4d33f72bcfae"} Jan 26 14:13:51 crc kubenswrapper[4922]: I0126 14:13:51.348333 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:13:51 crc kubenswrapper[4922]: I0126 14:13:51.355346 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:13:51 crc kubenswrapper[4922]: I0126 14:13:51.494721 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c46712e8abbe747d07f021c66e00e4bf0c70617ac651843ee3503c4d48341679"} Jan 26 14:13:51 crc kubenswrapper[4922]: I0126 14:13:51.494817 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c9b2968b364e889e02aebd48f991dd04e15280719672afac94cc4f321937eaae"} Jan 26 14:13:51 crc kubenswrapper[4922]: I0126 14:13:51.495108 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:13:51 crc kubenswrapper[4922]: I0126 14:13:51.495167 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:51 crc kubenswrapper[4922]: I0126 14:13:51.495361 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6" Jan 26 14:13:51 crc kubenswrapper[4922]: I0126 14:13:51.495393 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6" Jan 26 14:13:53 crc kubenswrapper[4922]: I0126 14:13:53.128050 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:53 crc kubenswrapper[4922]: I0126 14:13:53.128181 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:53 crc kubenswrapper[4922]: I0126 14:13:53.135248 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:13:55 crc kubenswrapper[4922]: I0126 14:13:55.970711 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" podUID="b7ff7450-4cc5-40df-a820-7cec4a3a9b95" containerName="oauth-openshift" containerID="cri-o://fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18" 
gracePeriod=15 Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.473743 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.537530 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-router-certs\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.537795 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-service-ca\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.537857 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6fb8\" (UniqueName: \"kubernetes.io/projected/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-kube-api-access-q6fb8\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.537888 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-dir\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.537995 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-ocp-branding-template\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") " Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.538092 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "audit-dir". 
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.538337 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-idp-0-file-data\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") "
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.538409 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-provider-selection\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") "
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.538487 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-trusted-ca-bundle\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") "
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539049 4922 generic.go:334] "Generic (PLEG): container finished" podID="b7ff7450-4cc5-40df-a820-7cec4a3a9b95" containerID="fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18" exitCode=0
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539139 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" event={"ID":"b7ff7450-4cc5-40df-a820-7cec4a3a9b95","Type":"ContainerDied","Data":"fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18"}
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539194 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx" event={"ID":"b7ff7450-4cc5-40df-a820-7cec4a3a9b95","Type":"ContainerDied","Data":"e80546f5cc430fd1f1eb6fa156d4ba3412e71104c6a18efdfe4004a47f1032d7"}
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539240 4922 scope.go:117] "RemoveContainer" containerID="fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18"
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539311 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5lfvx"
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539381 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539461 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-login\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") "
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539558 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539641 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-policies\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") "
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539682 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-session\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") "
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.539772 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-cliconfig\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") "
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.540703 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.540799 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-error\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") "
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.541019 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-serving-cert\") pod \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\" (UID: \"b7ff7450-4cc5-40df-a820-7cec4a3a9b95\") "
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.541225 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.541493 4922 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.541527 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.541549 4922 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.541574 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.541598 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.547136 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-kube-api-access-q6fb8" (OuterVolumeSpecName: "kube-api-access-q6fb8") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "kube-api-access-q6fb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.547054 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.547561 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.547971 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.549032 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.549734 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.550752 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.551263 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.552043 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b7ff7450-4cc5-40df-a820-7cec4a3a9b95" (UID: "b7ff7450-4cc5-40df-a820-7cec4a3a9b95"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.611350 4922 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.632529 4922 scope.go:117] "RemoveContainer" containerID="fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18"
Jan 26 14:13:56 crc kubenswrapper[4922]: E0126 14:13:56.633480 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18\": container with ID starting with fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18 not found: ID does not exist" containerID="fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18"
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.633735 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18"} err="failed to get container status \"fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18\": rpc error: code = NotFound desc = could not find container \"fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18\": container with ID starting with fd8c5898373ac325d057b64755f99abfa6a00a61ce4ed8f8c73aaca1e4307a18 not found: ID does not exist"
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.642633 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6fb8\" (UniqueName: \"kubernetes.io/projected/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-kube-api-access-q6fb8\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.642682 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.642696 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.642710 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.642726 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.642740 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.642752 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.642765 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.642777 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7ff7450-4cc5-40df-a820-7cec4a3a9b95-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 26 14:13:56 crc kubenswrapper[4922]: I0126 14:13:56.751551 4922 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5308d5f5-6e9d-4064-a7a8-360e305e2c30"
Jan 26 14:13:57 crc kubenswrapper[4922]: I0126 14:13:57.547828 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6"
Jan 26 14:13:57 crc kubenswrapper[4922]: I0126 14:13:57.549586 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6"
Jan 26 14:13:57 crc kubenswrapper[4922]: I0126 14:13:57.560865 4922 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5308d5f5-6e9d-4064-a7a8-360e305e2c30"
Jan 26 14:13:57 crc kubenswrapper[4922]: E0126 14:13:57.664572 4922 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
Jan 26 14:14:04 crc kubenswrapper[4922]: I0126 14:14:04.462710 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 14:14:06 crc kubenswrapper[4922]: I0126 14:14:06.274782 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 26 14:14:06 crc kubenswrapper[4922]: I0126 14:14:06.517194 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 14:14:06 crc kubenswrapper[4922]: I0126 14:14:06.517387 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 26 14:14:06 crc kubenswrapper[4922]: I0126 14:14:06.840714 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 14:14:07 crc kubenswrapper[4922]: I0126 14:14:07.406729 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 26 14:14:07 crc kubenswrapper[4922]: I0126 14:14:07.906331 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 26 14:14:08 crc kubenswrapper[4922]: I0126 14:14:08.004737 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 26 14:14:08 crc kubenswrapper[4922]: I0126 14:14:08.060678 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 14:14:08 crc kubenswrapper[4922]: I0126 14:14:08.220711 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 26 14:14:08 crc kubenswrapper[4922]: I0126 14:14:08.483610 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 26 14:14:08 crc kubenswrapper[4922]: I0126 14:14:08.522961 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 26 14:14:08 crc kubenswrapper[4922]: I0126 14:14:08.568774 4922 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 14:14:08 crc kubenswrapper[4922]: I0126 14:14:08.747848 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 26 14:14:08 crc kubenswrapper[4922]: I0126 14:14:08.980299 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.090393 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.129509 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.178718 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.295432 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.342921 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.397803 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.480354 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.507715 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.513382 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.522153 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.632338 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.635269 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.643278 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.712232 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.822763 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 26 14:14:09 crc kubenswrapper[4922]: I0126 14:14:09.860918 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.049680 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.135169 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.208763 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.264433 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.268537 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.285524 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.362649 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.383748 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.530285 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.594984 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.620057 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.739437 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.814320 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.818925 4922 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.826457 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5lfvx","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.826548 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.827467 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.827543 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30ef84c6-ac27-443b-a9a7-37596edecde6"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.831135 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.834602 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.855834 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.855778953 podStartE2EDuration="14.855778953s" podCreationTimestamp="2026-01-26 14:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:14:10.852458328 +0000 UTC m=+268.054721180" watchObservedRunningTime="2026-01-26 14:14:10.855778953 +0000 UTC m=+268.058041745"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.982089 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.991846 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 26 14:14:10 crc kubenswrapper[4922]: I0126 14:14:10.999677 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.106121 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7ff7450-4cc5-40df-a820-7cec4a3a9b95" path="/var/lib/kubelet/pods/b7ff7450-4cc5-40df-a820-7cec4a3a9b95/volumes"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.142917 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.154791 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.239241 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.289383 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.346949 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.354825 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.354969 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.407740 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.426023 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.495319 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.531080 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.618856 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.771874 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.794825 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.872618 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.893162 4922 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 26 14:14:11 crc kubenswrapper[4922]: I0126 14:14:11.969371 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.011300 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.075278 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.103620 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.129838 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.173237 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.247147 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.255235 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.265899 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.298379 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.357622 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.450586 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.453231 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.499423 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.525001 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.535574 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.583865 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.704736 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.741939 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.865170 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.912333 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.930470 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.969944 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.970664 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 26 14:14:12 crc kubenswrapper[4922]: I0126 14:14:12.976422 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.121053 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.253273 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.261950 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.331996 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.430981 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.497702 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.621852 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.623438 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.793417 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 26 14:14:13 crc kubenswrapper[4922]: I0126 14:14:13.952337 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 26 14:14:14 crc kubenswrapper[4922]: I0126 14:14:14.048589 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 26 14:14:14 crc kubenswrapper[4922]: I0126 14:14:14.237507 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 14:14:14 crc kubenswrapper[4922]: I0126 14:14:14.334954 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 26 14:14:14 crc kubenswrapper[4922]: I0126 14:14:14.400705 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 26 14:14:14 crc kubenswrapper[4922]: I0126 14:14:14.407920 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 26 14:14:14 crc kubenswrapper[4922]: I0126 14:14:14.535577 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 26 14:14:14 crc kubenswrapper[4922]: I0126 14:14:14.643537 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 26 14:14:14 crc kubenswrapper[4922]: I0126 14:14:14.750385 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.022904 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.050308 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.066402 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.159481 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.243778 4922 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.291415 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.405881 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.414687 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.479612 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.526950 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.576046 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.576428 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.587989 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"]
Jan 26 14:14:15 crc kubenswrapper[4922]: E0126 14:14:15.588400 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" containerName="installer"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.588423 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" containerName="installer"
Jan 26 14:14:15 crc kubenswrapper[4922]: E0126 14:14:15.588449 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7ff7450-4cc5-40df-a820-7cec4a3a9b95" containerName="oauth-openshift"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.588462 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7ff7450-4cc5-40df-a820-7cec4a3a9b95" containerName="oauth-openshift"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.588632 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7ff7450-4cc5-40df-a820-7cec4a3a9b95" containerName="oauth-openshift"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.588652 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ead0f98-f19c-47f8-b361-5c451349ab0e" containerName="installer"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.589258 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.592462 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.592606 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.593133 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.593152 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.594799 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.595143 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.595143 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.595347 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.595889 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.596135 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.596722 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.596882 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.604681 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.604768 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.604814 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-session\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.604877 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-template-login\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.604921 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsn5l\" (UniqueName: \"kubernetes.io/projected/6c8e2c68-ae16-478d-bf61-910cc98148a3-kube-api-access-tsn5l\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.604987 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-service-ca\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.605051 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-audit-policies\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.605339 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.605377 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.605411 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-template-error\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.605547 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.605603 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c8e2c68-ae16-478d-bf61-910cc98148a3-audit-dir\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.605893 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-router-certs\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.605940 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.608276 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.614186 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.615337 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.624303 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.627028 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.630852 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.680574 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.706735 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-template-login\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.706790 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsn5l\" (UniqueName: \"kubernetes.io/projected/6c8e2c68-ae16-478d-bf61-910cc98148a3-kube-api-access-tsn5l\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.706826 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-service-ca\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.706859 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-audit-policies\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.706889 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.706908 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.706927 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-template-error\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.706967 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.706995 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c8e2c68-ae16-478d-bf61-910cc98148a3-audit-dir\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"
Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.707024
4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-router-certs\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.707044 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.707083 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.707119 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.707137 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-session\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.708887 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c8e2c68-ae16-478d-bf61-910cc98148a3-audit-dir\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.710545 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-service-ca\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.711181 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.711183 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-audit-policies\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.711970 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.715290 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-session\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.715361 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-template-error\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.715856 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.717806 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.717846 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-user-template-login\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.719323 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-router-certs\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.725759 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.728632 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c8e2c68-ae16-478d-bf61-910cc98148a3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.740280 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsn5l\" (UniqueName: \"kubernetes.io/projected/6c8e2c68-ae16-478d-bf61-910cc98148a3-kube-api-access-tsn5l\") pod \"oauth-openshift-775b8f44b6-x5nrz\" (UID: \"6c8e2c68-ae16-478d-bf61-910cc98148a3\") " pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.757945 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.914877 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.924437 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 14:14:15 crc kubenswrapper[4922]: I0126 14:14:15.947387 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.001622 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.135239 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.169638 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.200931 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.239950 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.304081 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.340884 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.378826 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.400791 4922 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"] Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.429907 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.544322 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.591916 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.598158 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.599670 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-775b8f44b6-x5nrz"] Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.604719 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: W0126 14:14:16.611281 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c8e2c68_ae16_478d_bf61_910cc98148a3.slice/crio-e2f997a550ced1b9e56dfafec7fe9c9790fec3231dd79f8d4d6b55499c692140 WatchSource:0}: Error finding container e2f997a550ced1b9e56dfafec7fe9c9790fec3231dd79f8d4d6b55499c692140: Status 404 returned error can't find the container with id e2f997a550ced1b9e56dfafec7fe9c9790fec3231dd79f8d4d6b55499c692140 Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.645539 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.657224 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.679304 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" event={"ID":"6c8e2c68-ae16-478d-bf61-910cc98148a3","Type":"ContainerStarted","Data":"e2f997a550ced1b9e56dfafec7fe9c9790fec3231dd79f8d4d6b55499c692140"} Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.681794 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.846545 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.897854 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.964448 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 14:14:16 crc kubenswrapper[4922]: I0126 14:14:16.971903 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.138808 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 
14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.227198 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.248190 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.249169 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.275797 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.355351 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.362206 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.418310 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.423134 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.460174 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.504805 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.618138 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.645018 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.681620 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.681724 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.687762 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" event={"ID":"6c8e2c68-ae16-478d-bf61-910cc98148a3","Type":"ContainerStarted","Data":"61cb253ec5d1e153a988158824cce2411e8ff738f3deaf60411d8b699f4e94d2"} Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.689346 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.700345 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.734384 4922 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-authentication/oauth-openshift-775b8f44b6-x5nrz" podStartSLOduration=47.734362489 podStartE2EDuration="47.734362489s" podCreationTimestamp="2026-01-26 14:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:14:17.73090677 +0000 UTC m=+274.933169552" watchObservedRunningTime="2026-01-26 14:14:17.734362489 +0000 UTC m=+274.936625291" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.771440 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.897371 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 14:14:17 crc kubenswrapper[4922]: I0126 14:14:17.939729 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.009017 4922 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.033834 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.037978 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.097950 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.304670 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.341477 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.344646 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.344865 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.366431 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.382883 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.392151 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.457308 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.482268 4922 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.502939 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.518011 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.622216 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.623370 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.665468 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.675684 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.717101 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.739167 4922 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.797414 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.841815 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.885865 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 14:14:18 crc kubenswrapper[4922]: I0126 14:14:18.965840 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.157903 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.168863 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.178000 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.227205 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.241730 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.376377 4922 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.376765 4922 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc" gracePeriod=5 Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.560981 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.585418 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.849753 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.866614 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 14:14:19 crc kubenswrapper[4922]: I0126 14:14:19.987968 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.043996 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.130762 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.162013 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.196724 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.197237 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.282130 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.303515 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.492377 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.506570 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.779777 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.827517 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.838232 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.923285 4922 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"kube-root-ca.crt" Jan 26 14:14:20 crc kubenswrapper[4922]: I0126 14:14:20.962091 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.029645 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.056540 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.125731 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.222547 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.469867 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.553717 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.575279 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.582702 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.606816 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.622494 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.703402 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.728847 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 14:14:21 crc kubenswrapper[4922]: I0126 14:14:21.734864 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 14:14:22 crc kubenswrapper[4922]: I0126 14:14:22.164593 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 14:14:22 crc kubenswrapper[4922]: I0126 14:14:22.345323 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 14:14:22 crc kubenswrapper[4922]: I0126 14:14:22.422166 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 14:14:22 crc kubenswrapper[4922]: I0126 14:14:22.692853 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 14:14:22 crc kubenswrapper[4922]: I0126 14:14:22.837017 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 14:14:22 crc 
kubenswrapper[4922]: I0126 14:14:22.851656 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 14:14:22 crc kubenswrapper[4922]: I0126 14:14:22.936292 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 14:14:23 crc kubenswrapper[4922]: I0126 14:14:23.197464 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 14:14:23 crc kubenswrapper[4922]: I0126 14:14:23.431719 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.122048 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.189023 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.203001 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.499645 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.499724 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.545806 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.545856 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.545881 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.545906 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.545945 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.546248 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.546758 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.546808 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.546836 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.557414 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.647364 4922 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.647409 4922 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.647423 4922 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.647436 4922 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.647448 4922 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.984205 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.984302 4922 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc" exitCode=137 Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.984377 4922 scope.go:117] "RemoveContainer" containerID="5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc" Jan 26 14:14:24 crc kubenswrapper[4922]: I0126 14:14:24.984463 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:14:25 crc kubenswrapper[4922]: I0126 14:14:25.013963 4922 scope.go:117] "RemoveContainer" containerID="5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc" Jan 26 14:14:25 crc kubenswrapper[4922]: E0126 14:14:25.014752 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc\": container with ID starting with 5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc not found: ID does not exist" containerID="5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc" Jan 26 14:14:25 crc kubenswrapper[4922]: I0126 14:14:25.014829 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc"} err="failed to get container status \"5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc\": rpc error: code = NotFound desc = could not find container \"5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc\": container with ID starting with 5149a8e1d2b4a73a689c44406f67e6bc8815390924795824be852a87d89672bc not found: ID does not exist" Jan 26 14:14:25 crc kubenswrapper[4922]: I0126 14:14:25.103716 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 14:14:31 crc kubenswrapper[4922]: I0126 14:14:31.969751 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7xmz"] Jan 26 14:14:31 crc kubenswrapper[4922]: I0126 14:14:31.970528 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l7xmz" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerName="registry-server" containerID="cri-o://f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322" gracePeriod=30 Jan 26 14:14:31 crc kubenswrapper[4922]: I0126 14:14:31.985697 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2tbdp"] Jan 26 14:14:31 crc kubenswrapper[4922]: I0126 14:14:31.986547 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-26rnv"] Jan 26 14:14:31 crc kubenswrapper[4922]: I0126 14:14:31.986618 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2tbdp" podUID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerName="registry-server" containerID="cri-o://821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3" gracePeriod=30 Jan 26 14:14:31 crc kubenswrapper[4922]: I0126 14:14:31.986757 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" podUID="12e31154-e0cc-4aa6-802b-31590a683866" containerName="marketplace-operator" containerID="cri-o://9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52" gracePeriod=30 Jan 26 14:14:31 crc kubenswrapper[4922]: I0126 14:14:31.991956 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cvbq"] Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.003625 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-tj7hb"]
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.003887 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tj7hb" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerName="registry-server" containerID="cri-o://d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030" gracePeriod=30
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.022802 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tjq29"]
Jan 26 14:14:32 crc kubenswrapper[4922]: E0126 14:14:32.023139 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.023165 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.023318 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.023873 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.027954 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5cvbq" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerName="registry-server" containerID="cri-o://92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c" gracePeriod=30
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.038821 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tjq29"]
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.153779 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrdk\" (UniqueName: \"kubernetes.io/projected/a88a2014-3fba-45e3-bc74-1b2c803c10b5-kube-api-access-tkrdk\") pod \"marketplace-operator-79b997595-tjq29\" (UID: \"a88a2014-3fba-45e3-bc74-1b2c803c10b5\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.153892 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a88a2014-3fba-45e3-bc74-1b2c803c10b5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tjq29\" (UID: \"a88a2014-3fba-45e3-bc74-1b2c803c10b5\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.153940 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a88a2014-3fba-45e3-bc74-1b2c803c10b5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tjq29\" (UID: \"a88a2014-3fba-45e3-bc74-1b2c803c10b5\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.255818 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a88a2014-3fba-45e3-bc74-1b2c803c10b5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tjq29\" (UID: \"a88a2014-3fba-45e3-bc74-1b2c803c10b5\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.255900 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a88a2014-3fba-45e3-bc74-1b2c803c10b5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tjq29\" (UID: \"a88a2014-3fba-45e3-bc74-1b2c803c10b5\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.255966 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkrdk\" (UniqueName: \"kubernetes.io/projected/a88a2014-3fba-45e3-bc74-1b2c803c10b5-kube-api-access-tkrdk\") pod \"marketplace-operator-79b997595-tjq29\" (UID: \"a88a2014-3fba-45e3-bc74-1b2c803c10b5\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.258487 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a88a2014-3fba-45e3-bc74-1b2c803c10b5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tjq29\" (UID: \"a88a2014-3fba-45e3-bc74-1b2c803c10b5\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.263771 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a88a2014-3fba-45e3-bc74-1b2c803c10b5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tjq29\" (UID: \"a88a2014-3fba-45e3-bc74-1b2c803c10b5\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.281802 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrdk\" (UniqueName: \"kubernetes.io/projected/a88a2014-3fba-45e3-bc74-1b2c803c10b5-kube-api-access-tkrdk\") pod \"marketplace-operator-79b997595-tjq29\" (UID: \"a88a2014-3fba-45e3-bc74-1b2c803c10b5\") " pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.427903 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.445412 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.447548 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7xmz"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.459951 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2tbdp"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.481078 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tj7hb"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.481832 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.561631 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-trusted-ca\") pod \"12e31154-e0cc-4aa6-802b-31590a683866\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.561710 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-catalog-content\") pod \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.561729 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-catalog-content\") pod \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.561762 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-operator-metrics\") pod \"12e31154-e0cc-4aa6-802b-31590a683866\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562012 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-utilities\") pod \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562039 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42w6v\" (UniqueName: \"kubernetes.io/projected/cfcca17c-5b8e-42fa-8fa2-56139592b85b-kube-api-access-42w6v\") pod \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\" (UID: \"cfcca17c-5b8e-42fa-8fa2-56139592b85b\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562144 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-utilities\") pod \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562171 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-utilities\") pod \"95a340dd-cf35-496b-aae2-9190b1b24d2b\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562211 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-catalog-content\") pod \"95a340dd-cf35-496b-aae2-9190b1b24d2b\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562235 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jlsl\" (UniqueName: \"kubernetes.io/projected/eda39827-b747-4e2e-9c8c-5f699cdf4a96-kube-api-access-7jlsl\") pod \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\" (UID: \"eda39827-b747-4e2e-9c8c-5f699cdf4a96\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562263 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-utilities\") pod \"18f19460-3c63-42ea-b891-10d9b8a36e2e\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562286 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv28t\" (UniqueName: \"kubernetes.io/projected/12e31154-e0cc-4aa6-802b-31590a683866-kube-api-access-bv28t\") pod \"12e31154-e0cc-4aa6-802b-31590a683866\" (UID: \"12e31154-e0cc-4aa6-802b-31590a683866\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562322 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdqbl\" (UniqueName: \"kubernetes.io/projected/18f19460-3c63-42ea-b891-10d9b8a36e2e-kube-api-access-gdqbl\") pod \"18f19460-3c63-42ea-b891-10d9b8a36e2e\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562366 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-catalog-content\") pod \"18f19460-3c63-42ea-b891-10d9b8a36e2e\" (UID: \"18f19460-3c63-42ea-b891-10d9b8a36e2e\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.562396 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87jz8\" (UniqueName: \"kubernetes.io/projected/95a340dd-cf35-496b-aae2-9190b1b24d2b-kube-api-access-87jz8\") pod \"95a340dd-cf35-496b-aae2-9190b1b24d2b\" (UID: \"95a340dd-cf35-496b-aae2-9190b1b24d2b\") "
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.565347 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "12e31154-e0cc-4aa6-802b-31590a683866" (UID: "12e31154-e0cc-4aa6-802b-31590a683866"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.577391 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda39827-b747-4e2e-9c8c-5f699cdf4a96-kube-api-access-7jlsl" (OuterVolumeSpecName: "kube-api-access-7jlsl") pod "eda39827-b747-4e2e-9c8c-5f699cdf4a96" (UID: "eda39827-b747-4e2e-9c8c-5f699cdf4a96"). InnerVolumeSpecName "kube-api-access-7jlsl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.582224 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-utilities" (OuterVolumeSpecName: "utilities") pod "18f19460-3c63-42ea-b891-10d9b8a36e2e" (UID: "18f19460-3c63-42ea-b891-10d9b8a36e2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.587529 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e31154-e0cc-4aa6-802b-31590a683866-kube-api-access-bv28t" (OuterVolumeSpecName: "kube-api-access-bv28t") pod "12e31154-e0cc-4aa6-802b-31590a683866" (UID: "12e31154-e0cc-4aa6-802b-31590a683866"). InnerVolumeSpecName "kube-api-access-bv28t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.587669 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-utilities" (OuterVolumeSpecName: "utilities") pod "95a340dd-cf35-496b-aae2-9190b1b24d2b" (UID: "95a340dd-cf35-496b-aae2-9190b1b24d2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.598353 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a340dd-cf35-496b-aae2-9190b1b24d2b-kube-api-access-87jz8" (OuterVolumeSpecName: "kube-api-access-87jz8") pod "95a340dd-cf35-496b-aae2-9190b1b24d2b" (UID: "95a340dd-cf35-496b-aae2-9190b1b24d2b"). InnerVolumeSpecName "kube-api-access-87jz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.600346 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f19460-3c63-42ea-b891-10d9b8a36e2e-kube-api-access-gdqbl" (OuterVolumeSpecName: "kube-api-access-gdqbl") pod "18f19460-3c63-42ea-b891-10d9b8a36e2e" (UID: "18f19460-3c63-42ea-b891-10d9b8a36e2e"). InnerVolumeSpecName "kube-api-access-gdqbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.603453 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfcca17c-5b8e-42fa-8fa2-56139592b85b-kube-api-access-42w6v" (OuterVolumeSpecName: "kube-api-access-42w6v") pod "cfcca17c-5b8e-42fa-8fa2-56139592b85b" (UID: "cfcca17c-5b8e-42fa-8fa2-56139592b85b"). InnerVolumeSpecName "kube-api-access-42w6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.605251 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-utilities" (OuterVolumeSpecName: "utilities") pod "cfcca17c-5b8e-42fa-8fa2-56139592b85b" (UID: "cfcca17c-5b8e-42fa-8fa2-56139592b85b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.621647 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "12e31154-e0cc-4aa6-802b-31590a683866" (UID: "12e31154-e0cc-4aa6-802b-31590a683866"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.637009 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-utilities" (OuterVolumeSpecName: "utilities") pod "eda39827-b747-4e2e-9c8c-5f699cdf4a96" (UID: "eda39827-b747-4e2e-9c8c-5f699cdf4a96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.646972 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95a340dd-cf35-496b-aae2-9190b1b24d2b" (UID: "95a340dd-cf35-496b-aae2-9190b1b24d2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.648755 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18f19460-3c63-42ea-b891-10d9b8a36e2e" (UID: "18f19460-3c63-42ea-b891-10d9b8a36e2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664731 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664790 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42w6v\" (UniqueName: \"kubernetes.io/projected/cfcca17c-5b8e-42fa-8fa2-56139592b85b-kube-api-access-42w6v\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664805 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664816 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664826 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a340dd-cf35-496b-aae2-9190b1b24d2b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664836 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jlsl\" (UniqueName: \"kubernetes.io/projected/eda39827-b747-4e2e-9c8c-5f699cdf4a96-kube-api-access-7jlsl\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664846 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664856 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv28t\" (UniqueName: \"kubernetes.io/projected/12e31154-e0cc-4aa6-802b-31590a683866-kube-api-access-bv28t\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664868 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdqbl\" (UniqueName: \"kubernetes.io/projected/18f19460-3c63-42ea-b891-10d9b8a36e2e-kube-api-access-gdqbl\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664877 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18f19460-3c63-42ea-b891-10d9b8a36e2e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664886 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87jz8\" (UniqueName: \"kubernetes.io/projected/95a340dd-cf35-496b-aae2-9190b1b24d2b-kube-api-access-87jz8\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664896 4922 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.664905 4922 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12e31154-e0cc-4aa6-802b-31590a683866-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.676913 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eda39827-b747-4e2e-9c8c-5f699cdf4a96" (UID: "eda39827-b747-4e2e-9c8c-5f699cdf4a96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.741800 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tjq29"]
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.757310 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfcca17c-5b8e-42fa-8fa2-56139592b85b" (UID: "cfcca17c-5b8e-42fa-8fa2-56139592b85b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.766287 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcca17c-5b8e-42fa-8fa2-56139592b85b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:32 crc kubenswrapper[4922]: I0126 14:14:32.766347 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda39827-b747-4e2e-9c8c-5f699cdf4a96-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.036012 4922 generic.go:334] "Generic (PLEG): container finished" podID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerID="821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3" exitCode=0
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.036103 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2tbdp"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.036121 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tbdp" event={"ID":"eda39827-b747-4e2e-9c8c-5f699cdf4a96","Type":"ContainerDied","Data":"821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.037806 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tbdp" event={"ID":"eda39827-b747-4e2e-9c8c-5f699cdf4a96","Type":"ContainerDied","Data":"e9ee5aeff376458f4e744275a44c95fbd165f0cbd1645d8d43d4038f1e7bbb26"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.037862 4922 scope.go:117] "RemoveContainer" containerID="821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.040481 4922 generic.go:334] "Generic (PLEG): container finished" podID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerID="d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030" exitCode=0
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.040552 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tj7hb" event={"ID":"cfcca17c-5b8e-42fa-8fa2-56139592b85b","Type":"ContainerDied","Data":"d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.040572 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tj7hb" event={"ID":"cfcca17c-5b8e-42fa-8fa2-56139592b85b","Type":"ContainerDied","Data":"637bd2c30f01bafed5a95e0d8c0d3a965c3955af72e68e465a842a5fec773d0a"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.040624 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tj7hb"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.041911 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tjq29" event={"ID":"a88a2014-3fba-45e3-bc74-1b2c803c10b5","Type":"ContainerStarted","Data":"0295dac6cd4a80d007bd0f05d6b5495ba934d751d9d1443167504a56f50c4db4"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.041946 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tjq29" event={"ID":"a88a2014-3fba-45e3-bc74-1b2c803c10b5","Type":"ContainerStarted","Data":"6b66286aefe3ea05b776ddd23453b485afb13976ac26c4439b16ee4169b61dc5"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.042922 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.045425 4922 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tjq29 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body=
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.045486 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tjq29" podUID="a88a2014-3fba-45e3-bc74-1b2c803c10b5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.048031 4922 generic.go:334] "Generic (PLEG): container finished" podID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerID="f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322" exitCode=0
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.048094 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7xmz" event={"ID":"18f19460-3c63-42ea-b891-10d9b8a36e2e","Type":"ContainerDied","Data":"f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.048464 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7xmz" event={"ID":"18f19460-3c63-42ea-b891-10d9b8a36e2e","Type":"ContainerDied","Data":"b39c69d3e618f43118272112ee39e890ff44679d35471f5b5ce0b2925f85e5be"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.048123 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7xmz"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.052861 4922 scope.go:117] "RemoveContainer" containerID="14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.057664 4922 generic.go:334] "Generic (PLEG): container finished" podID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerID="92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c" exitCode=0
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.057773 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cvbq" event={"ID":"95a340dd-cf35-496b-aae2-9190b1b24d2b","Type":"ContainerDied","Data":"92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.057876 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cvbq" event={"ID":"95a340dd-cf35-496b-aae2-9190b1b24d2b","Type":"ContainerDied","Data":"5caf5ca5a54d1710abec09c60d52739aa821955ba85ce740bad4bdb3252f3ffe"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.058292 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cvbq"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.060581 4922 generic.go:334] "Generic (PLEG): container finished" podID="12e31154-e0cc-4aa6-802b-31590a683866" containerID="9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52" exitCode=0
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.060650 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.060668 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" event={"ID":"12e31154-e0cc-4aa6-802b-31590a683866","Type":"ContainerDied","Data":"9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.060719 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-26rnv" event={"ID":"12e31154-e0cc-4aa6-802b-31590a683866","Type":"ContainerDied","Data":"ad0f4cf1b7540d0ffd2f7b3e6f29ba02f2ff9cebcdbdd6473d67363d4668f8fd"}
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.075854 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tjq29" podStartSLOduration=2.075824419 podStartE2EDuration="2.075824419s" podCreationTimestamp="2026-01-26 14:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:14:33.066990074 +0000 UTC m=+290.269252886" watchObservedRunningTime="2026-01-26 14:14:33.075824419 +0000 UTC m=+290.278087211"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.077863 4922 scope.go:117] "RemoveContainer" containerID="83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.104917 4922 scope.go:117] "RemoveContainer" containerID="821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.107557 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3\": container with ID starting with 821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3 not found: ID does not exist" containerID="821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.108635 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3"} err="failed to get container status \"821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3\": rpc error: code = NotFound desc = could not find container \"821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3\": container with ID starting with 821031ea371e0b46a4c4804f56f8dfe84d7c9f32a4a56bb5e1197f1fdbb78fd3 not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.109441 4922 scope.go:117] "RemoveContainer" containerID="14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.110708 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a\": container with ID starting with 14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a not found: ID does not exist" containerID="14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.110783 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a"} err="failed to get container status \"14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a\": rpc error: code = NotFound desc = could not find container \"14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a\": container with ID starting with 14088bcdd1ce6f34a55208541c88f0a6d4c61beda9e9b493ea5175a168b6954a not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.110825 4922 scope.go:117] "RemoveContainer" containerID="83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.113344 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23\": container with ID starting with 83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23 not found: ID does not exist" containerID="83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.113383 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23"} err="failed to get container status \"83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23\": rpc error: code = NotFound desc = could not find container \"83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23\": container with ID starting with 83c8bf3efeda25b736577f53534a778b1508f85208271c0ed7d8f4bf30acaf23 not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.113411 4922 scope.go:117] "RemoveContainer" containerID="d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.125553 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7xmz"]
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.129128 4922 scope.go:117] "RemoveContainer" containerID="7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.134175 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l7xmz"]
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.146531 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-26rnv"]
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.148473 4922 scope.go:117] "RemoveContainer" containerID="90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.153715 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-26rnv"]
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.169028 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2tbdp"]
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.176794 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2tbdp"]
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.185330 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tj7hb"]
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.188672 4922 scope.go:117] "RemoveContainer" containerID="d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.189986 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030\": container with ID starting with d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030 not found: ID does not exist" containerID="d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.190056 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030"} err="failed to get container status \"d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030\": rpc error: code = NotFound desc = could not find container \"d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030\": container with ID starting with d91a1c7c0dfc58996aa22b93571b24b0fe12a87e1284ba3f29dda970601d4030 not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.190102 4922 scope.go:117] "RemoveContainer" containerID="7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.190201 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tj7hb"]
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.190437 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762\": container with ID starting with 7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762 not found: ID does not exist" containerID="7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.190486 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762"} err="failed to get container status \"7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762\": rpc error: code = NotFound desc = could not find container \"7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762\": container with ID starting with 7e25a9dc7696abad179c36249e924b2ed41078e7b03ff218231495e3c16da762 not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.190511 4922 scope.go:117] "RemoveContainer" containerID="90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.192370 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0\": container with ID starting with 90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0 not found: ID does not exist" containerID="90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.192456 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0"} err="failed to get container status \"90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0\": rpc error: code = NotFound desc = could not find container \"90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0\": container with ID starting with 90ecc613fc6e108adf01743bdd4f65fe6cc8fcd75de0d9567ebd8ee936be16f0 not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.192476 4922 scope.go:117] "RemoveContainer" containerID="f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.197217 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cvbq"]
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.201780 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cvbq"]
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.207578 4922 scope.go:117] "RemoveContainer" containerID="73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.235009 4922 scope.go:117] "RemoveContainer" containerID="7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.251792 4922 scope.go:117] "RemoveContainer" containerID="f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.252273 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322\": container with ID starting with f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322 not found: ID does not exist" containerID="f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.252306 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322"} err="failed to get container status \"f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322\": rpc error: code = NotFound desc = could not find container \"f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322\": container with ID starting with f65a93d654e8139c61c621bbbd72043ff16e8726f242c665fd1bf8fad0290322 not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.252330 4922 scope.go:117] "RemoveContainer" containerID="73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.252616 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750\": container with ID starting with 73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750 not found: ID does not exist" containerID="73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.252647 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750"} err="failed to get container status \"73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750\": rpc error: code = NotFound desc = could not find container \"73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750\": container with ID starting with 73422da4b8044388c7454927d02bee2846ae51d2a518bf92b27b76dd51180750 not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.252660 4922 scope.go:117] "RemoveContainer" containerID="7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.252864 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a\": container with ID starting with 7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a not found: ID does not exist" containerID="7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.252885 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a"} err="failed to get container status \"7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a\": rpc error: code = NotFound desc = could not find container \"7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a\": container with ID starting with 7aae1b99a77d490d42f681dfe7a4aff42b9f5f9efb7305dec3300936f92fe90a not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.252898 4922 scope.go:117] "RemoveContainer" containerID="92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.276339 4922 scope.go:117] "RemoveContainer" containerID="52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.291749 4922 scope.go:117] "RemoveContainer" containerID="ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.317652 4922 scope.go:117] "RemoveContainer" containerID="92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.319480 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c\": container with ID starting with 92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c not found: ID does not exist" containerID="92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.319530 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c"} err="failed to get container status \"92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c\": rpc error: code = NotFound desc = could not find container \"92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c\": container with ID starting with 92d88b1a18eb719f6e7072d68ca12a3d12a3e9797617c3a559e62a6a08b1cc3c not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.319565 4922 scope.go:117] "RemoveContainer" containerID="52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.320518 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68\": container with ID starting with 52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68 not found: ID does not exist" containerID="52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.320560 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68"} err="failed to get container status \"52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68\": rpc error: code = NotFound desc = could not find container \"52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68\": container with ID starting with 52d93d1f627820640942ec89318c1c90d1e19ac731195424e0065d8a6c170f68 not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.320585 4922 scope.go:117] "RemoveContainer" containerID="ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.320983 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387\": container with ID starting with ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387 not found: ID does not exist" containerID="ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.321043 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387"} err="failed to get container status \"ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387\": rpc error: code = NotFound desc = could not find container \"ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387\": container with ID starting with ba68062fe94f5441d609ea9292b2fce0fc92b658bf34184625517b391dd43387 not found: ID does not exist"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.321082 4922 scope.go:117] "RemoveContainer" containerID="9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.339233 4922 scope.go:117] "RemoveContainer" containerID="9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52"
Jan 26 14:14:33 crc kubenswrapper[4922]: E0126 14:14:33.339797 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52\": container with ID starting with 9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52 not found: ID does not exist" containerID="9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52"
Jan 26 14:14:33 crc kubenswrapper[4922]: I0126 14:14:33.339840 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52"} err="failed to get container status \"9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52\": rpc error: code = NotFound desc = could not find container \"9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52\": container with ID starting with 9d03b4bb59edbd564cea686db8a3f0cf7d78a4fac23993a970b495799f0bba52 not found: ID does not exist"
Jan 26 14:14:34 crc kubenswrapper[4922]: I0126 14:14:34.081581 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tjq29"
Jan 26 14:14:35 crc kubenswrapper[4922]: I0126 14:14:35.103207 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e31154-e0cc-4aa6-802b-31590a683866" path="/var/lib/kubelet/pods/12e31154-e0cc-4aa6-802b-31590a683866/volumes"
Jan 26 14:14:35 crc kubenswrapper[4922]: I0126 14:14:35.104959 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e" path="/var/lib/kubelet/pods/18f19460-3c63-42ea-b891-10d9b8a36e2e/volumes"
Jan 26 14:14:35 crc kubenswrapper[4922]: I0126 14:14:35.106364 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b" path="/var/lib/kubelet/pods/95a340dd-cf35-496b-aae2-9190b1b24d2b/volumes"
Jan 26 14:14:35 crc kubenswrapper[4922]: I0126 14:14:35.108530 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" path="/var/lib/kubelet/pods/cfcca17c-5b8e-42fa-8fa2-56139592b85b/volumes"
Jan 26 14:14:35 crc kubenswrapper[4922]: I0126 14:14:35.109697 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" path="/var/lib/kubelet/pods/eda39827-b747-4e2e-9c8c-5f699cdf4a96/volumes"
Jan 26 14:14:42 crc kubenswrapper[4922]: I0126 14:14:42.934404 4922 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 26 14:14:45 crc kubenswrapper[4922]: I0126 14:14:45.585504 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jsdpn"]
Jan 26 14:14:45 crc kubenswrapper[4922]: I0126 14:14:45.586422 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" podUID="7bab675e-e24a-43aa-abdd-0e657671535d" containerName="controller-manager" containerID="cri-o://c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345" gracePeriod=30
Jan 26 14:14:45 crc kubenswrapper[4922]: I0126 14:14:45.695813 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb"]
Jan 26 14:14:45 crc kubenswrapper[4922]: I0126 14:14:45.696014 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" podUID="43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" containerName="route-controller-manager" containerID="cri-o://1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249" gracePeriod=30
Jan 26 14:14:45 crc kubenswrapper[4922]: I0126 14:14:45.970608 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.034360 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.154452 4922 generic.go:334] "Generic (PLEG): container finished" podID="43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" containerID="1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249" exitCode=0
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.154525 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" event={"ID":"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3","Type":"ContainerDied","Data":"1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249"}
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.154553 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb" event={"ID":"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3","Type":"ContainerDied","Data":"57bf27040dc5d0c1e76f4fad3fa7c91b2872f0edad397962c90ec3fa2b7de4bd"}
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.154571 4922 scope.go:117] "RemoveContainer" containerID="1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.154677 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.156421 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-client-ca\") pod \"7bab675e-e24a-43aa-abdd-0e657671535d\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") "
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.156480 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-config\") pod \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") "
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.156536 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-proxy-ca-bundles\") pod \"7bab675e-e24a-43aa-abdd-0e657671535d\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") "
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.156585 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5785\" (UniqueName: \"kubernetes.io/projected/7bab675e-e24a-43aa-abdd-0e657671535d-kube-api-access-q5785\") pod \"7bab675e-e24a-43aa-abdd-0e657671535d\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") "
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.156656 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2lmx\" (UniqueName: \"kubernetes.io/projected/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-kube-api-access-t2lmx\") pod \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") "
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.156728 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-config\") pod \"7bab675e-e24a-43aa-abdd-0e657671535d\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") "
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.156783 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-serving-cert\") pod \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") "
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.156829 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bab675e-e24a-43aa-abdd-0e657671535d-serving-cert\") pod \"7bab675e-e24a-43aa-abdd-0e657671535d\" (UID: \"7bab675e-e24a-43aa-abdd-0e657671535d\") "
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.156866 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-client-ca\") pod \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\" (UID: \"43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3\") "
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.157784 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-config" (OuterVolumeSpecName: "config") pod "43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" (UID: "43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.158108 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-config" (OuterVolumeSpecName: "config") pod "7bab675e-e24a-43aa-abdd-0e657671535d" (UID: "7bab675e-e24a-43aa-abdd-0e657671535d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.158162 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-client-ca" (OuterVolumeSpecName: "client-ca") pod "43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" (UID: "43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.158359 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7bab675e-e24a-43aa-abdd-0e657671535d" (UID: "7bab675e-e24a-43aa-abdd-0e657671535d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.158543 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-client-ca" (OuterVolumeSpecName: "client-ca") pod "7bab675e-e24a-43aa-abdd-0e657671535d" (UID: "7bab675e-e24a-43aa-abdd-0e657671535d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.159786 4922 generic.go:334] "Generic (PLEG): container finished" podID="7bab675e-e24a-43aa-abdd-0e657671535d" containerID="c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345" exitCode=0
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.159828 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" event={"ID":"7bab675e-e24a-43aa-abdd-0e657671535d","Type":"ContainerDied","Data":"c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345"}
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.159857 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn" event={"ID":"7bab675e-e24a-43aa-abdd-0e657671535d","Type":"ContainerDied","Data":"80eb6e9955cdad4c49fa12d7931b2eead9c552f6de198bdbbe2cc52203d77687"}
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.159885 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jsdpn"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.164322 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bab675e-e24a-43aa-abdd-0e657671535d-kube-api-access-q5785" (OuterVolumeSpecName: "kube-api-access-q5785") pod "7bab675e-e24a-43aa-abdd-0e657671535d" (UID: "7bab675e-e24a-43aa-abdd-0e657671535d"). InnerVolumeSpecName "kube-api-access-q5785". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.164927 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" (UID: "43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.164981 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bab675e-e24a-43aa-abdd-0e657671535d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7bab675e-e24a-43aa-abdd-0e657671535d" (UID: "7bab675e-e24a-43aa-abdd-0e657671535d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.167092 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-kube-api-access-t2lmx" (OuterVolumeSpecName: "kube-api-access-t2lmx") pod "43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" (UID: "43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3"). InnerVolumeSpecName "kube-api-access-t2lmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.178160 4922 scope.go:117] "RemoveContainer" containerID="1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249"
Jan 26 14:14:46 crc kubenswrapper[4922]: E0126 14:14:46.178733 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249\": container with ID starting with 1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249 not found: ID does not exist" containerID="1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.178813 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249"} err="failed to get container status \"1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249\": rpc error: code = NotFound desc = could not find container \"1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249\": container with ID starting with 1b9854c60aea1983770c538d07b82199f34a173e3b11e136810671798c565249 not found: ID does not exist"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.178853 4922 scope.go:117] "RemoveContainer" containerID="c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.197318 4922 scope.go:117] "RemoveContainer" containerID="c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345"
Jan 26 14:14:46 crc kubenswrapper[4922]: E0126 14:14:46.199559 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345\": container with ID starting with c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345 not found: ID does not exist" containerID="c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.199687 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345"} err="failed to get container status \"c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345\": rpc error: code = NotFound desc = could not find container \"c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345\": container with ID starting with c17bb01dc75fda1b5d367fc52b27470a894c7db9005d6c5d003a49cb8de62345 not found: ID does not exist"
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.258886 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-config\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.259127 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.259151 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.259194 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5785\" (UniqueName: \"kubernetes.io/projected/7bab675e-e24a-43aa-abdd-0e657671535d-kube-api-access-q5785\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.259232 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2lmx\" (UniqueName: \"kubernetes.io/projected/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-kube-api-access-t2lmx\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.259349 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bab675e-e24a-43aa-abdd-0e657671535d-config\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.259362 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.259375 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bab675e-e24a-43aa-abdd-0e657671535d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.259388 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.493990 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb"]
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.497038 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ft6sb"]
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.513400 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jsdpn"]
Jan 26 14:14:46 crc kubenswrapper[4922]: I0126 14:14:46.518354 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jsdpn"]
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.101567 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" path="/var/lib/kubelet/pods/43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3/volumes"
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.102559 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bab675e-e24a-43aa-abdd-0e657671535d" path="/var/lib/kubelet/pods/7bab675e-e24a-43aa-abdd-0e657671535d/volumes"
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616376 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86557f56b-mfhql"]
Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616655 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerName="extract-content"
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616670 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerName="extract-content"
Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616683 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerName="extract-content"
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616691 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerName="extract-content"
Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616704 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerName="extract-utilities"
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616723 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerName="extract-utilities"
Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616745 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" containerName="route-controller-manager"
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616753 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" containerName="route-controller-manager"
Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616771 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerName="registry-server"
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616780 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerName="registry-server"
Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616793 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerName="registry-server"
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616801 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerName="registry-server"
Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616810 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerName="registry-server"
Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616819 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda39827-b747-4e2e-9c8c-5f699cdf4a96"
containerName="registry-server" Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616831 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerName="extract-utilities" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616839 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerName="extract-utilities" Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616852 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerName="extract-content" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616859 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerName="extract-content" Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616871 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bab675e-e24a-43aa-abdd-0e657671535d" containerName="controller-manager" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616879 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bab675e-e24a-43aa-abdd-0e657671535d" containerName="controller-manager" Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616893 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerName="extract-utilities" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616901 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerName="extract-utilities" Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616916 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerName="registry-server" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616924 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerName="registry-server" Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616936 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerName="extract-content" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616946 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerName="extract-content" Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616957 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerName="extract-utilities" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616965 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerName="extract-utilities" Jan 26 14:14:47 crc kubenswrapper[4922]: E0126 14:14:47.616975 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e31154-e0cc-4aa6-802b-31590a683866" containerName="marketplace-operator" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.616983 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e31154-e0cc-4aa6-802b-31590a683866" containerName="marketplace-operator" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.617105 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e31154-e0cc-4aa6-802b-31590a683866" containerName="marketplace-operator" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.617121 4922 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="cfcca17c-5b8e-42fa-8fa2-56139592b85b" containerName="registry-server" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.617133 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e3ab3b-9e40-4388-a3d6-a3774fdc6bf3" containerName="route-controller-manager" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.617151 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f19460-3c63-42ea-b891-10d9b8a36e2e" containerName="registry-server" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.617162 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda39827-b747-4e2e-9c8c-5f699cdf4a96" containerName="registry-server" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.617174 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a340dd-cf35-496b-aae2-9190b1b24d2b" containerName="registry-server" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.617185 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bab675e-e24a-43aa-abdd-0e657671535d" containerName="controller-manager" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.617621 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.622399 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.622911 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.623281 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.623646 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.623748 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.623798 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.635713 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.643456 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm"] Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.644604 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.648529 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.648553 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.648941 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.649273 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.648759 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.649869 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.660027 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm"] Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.663979 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86557f56b-mfhql"] Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.777125 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wn2m\" (UniqueName: \"kubernetes.io/projected/debd57b2-4a57-4f68-9409-33ba951c68bd-kube-api-access-5wn2m\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.777202 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvwnf\" (UniqueName: \"kubernetes.io/projected/e37eae2d-c374-4ded-996a-caca11428801-kube-api-access-zvwnf\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.777234 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e37eae2d-c374-4ded-996a-caca11428801-config\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.777273 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e37eae2d-c374-4ded-996a-caca11428801-proxy-ca-bundles\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.777304 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-config\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.777574 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debd57b2-4a57-4f68-9409-33ba951c68bd-serving-cert\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.777610 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-client-ca\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.777631 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e37eae2d-c374-4ded-996a-caca11428801-serving-cert\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.777652 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e37eae2d-c374-4ded-996a-caca11428801-client-ca\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.879659 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debd57b2-4a57-4f68-9409-33ba951c68bd-serving-cert\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.880353 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-client-ca\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.880403 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e37eae2d-c374-4ded-996a-caca11428801-serving-cert\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.880439 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/e37eae2d-c374-4ded-996a-caca11428801-client-ca\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.880535 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wn2m\" (UniqueName: \"kubernetes.io/projected/debd57b2-4a57-4f68-9409-33ba951c68bd-kube-api-access-5wn2m\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.880578 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvwnf\" (UniqueName: \"kubernetes.io/projected/e37eae2d-c374-4ded-996a-caca11428801-kube-api-access-zvwnf\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.880617 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e37eae2d-c374-4ded-996a-caca11428801-config\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.880670 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-config\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.880713 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e37eae2d-c374-4ded-996a-caca11428801-proxy-ca-bundles\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.882593 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e37eae2d-c374-4ded-996a-caca11428801-client-ca\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.883009 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-client-ca\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.883608 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-config\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " 
pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.883818 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e37eae2d-c374-4ded-996a-caca11428801-proxy-ca-bundles\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.883832 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e37eae2d-c374-4ded-996a-caca11428801-config\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.890156 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debd57b2-4a57-4f68-9409-33ba951c68bd-serving-cert\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.891794 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e37eae2d-c374-4ded-996a-caca11428801-serving-cert\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.903662 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvwnf\" (UniqueName: \"kubernetes.io/projected/e37eae2d-c374-4ded-996a-caca11428801-kube-api-access-zvwnf\") pod \"controller-manager-86557f56b-mfhql\" (UID: \"e37eae2d-c374-4ded-996a-caca11428801\") " pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.908356 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wn2m\" (UniqueName: \"kubernetes.io/projected/debd57b2-4a57-4f68-9409-33ba951c68bd-kube-api-access-5wn2m\") pod \"route-controller-manager-774665f746-9ktpm\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.953703 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:47 crc kubenswrapper[4922]: I0126 14:14:47.972055 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:48 crc kubenswrapper[4922]: I0126 14:14:48.202356 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm"] Jan 26 14:14:48 crc kubenswrapper[4922]: I0126 14:14:48.250212 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86557f56b-mfhql"] Jan 26 14:14:48 crc kubenswrapper[4922]: W0126 14:14:48.251762 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode37eae2d_c374_4ded_996a_caca11428801.slice/crio-c754c848f3c364823841511e4ae6a2e8ff8e52eab73222b37b223c16f5bed7fc WatchSource:0}: Error finding container c754c848f3c364823841511e4ae6a2e8ff8e52eab73222b37b223c16f5bed7fc: Status 404 returned error can't find the container with id c754c848f3c364823841511e4ae6a2e8ff8e52eab73222b37b223c16f5bed7fc Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.192722 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" event={"ID":"debd57b2-4a57-4f68-9409-33ba951c68bd","Type":"ContainerStarted","Data":"9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6"} Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.192818 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" event={"ID":"debd57b2-4a57-4f68-9409-33ba951c68bd","Type":"ContainerStarted","Data":"1f471eb5ff939cbb022593eb01dc542370574b3663fea67bdc97605e14cdb7b7"} Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.193023 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.195132 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" event={"ID":"e37eae2d-c374-4ded-996a-caca11428801","Type":"ContainerStarted","Data":"eb8d7fd9e3cf4f61254730fe67f9e28126751db602ee662d1cf506d9e031f878"} Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.195182 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" event={"ID":"e37eae2d-c374-4ded-996a-caca11428801","Type":"ContainerStarted","Data":"c754c848f3c364823841511e4ae6a2e8ff8e52eab73222b37b223c16f5bed7fc"} Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.195446 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.204797 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.208423 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.233743 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" podStartSLOduration=4.233714011 podStartE2EDuration="4.233714011s" 
podCreationTimestamp="2026-01-26 14:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:14:49.227839221 +0000 UTC m=+306.430101993" watchObservedRunningTime="2026-01-26 14:14:49.233714011 +0000 UTC m=+306.435976823" Jan 26 14:14:49 crc kubenswrapper[4922]: I0126 14:14:49.263464 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86557f56b-mfhql" podStartSLOduration=4.263417971 podStartE2EDuration="4.263417971s" podCreationTimestamp="2026-01-26 14:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:14:49.244806872 +0000 UTC m=+306.447069644" watchObservedRunningTime="2026-01-26 14:14:49.263417971 +0000 UTC m=+306.465680783" Jan 26 14:14:50 crc kubenswrapper[4922]: I0126 14:14:50.255132 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm"] Jan 26 14:14:52 crc kubenswrapper[4922]: I0126 14:14:52.211342 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" podUID="debd57b2-4a57-4f68-9409-33ba951c68bd" containerName="route-controller-manager" containerID="cri-o://9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6" gracePeriod=30 Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.174277 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.217796 4922 generic.go:334] "Generic (PLEG): container finished" podID="debd57b2-4a57-4f68-9409-33ba951c68bd" containerID="9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6" exitCode=0 Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.217868 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" event={"ID":"debd57b2-4a57-4f68-9409-33ba951c68bd","Type":"ContainerDied","Data":"9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6"} Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.217921 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" event={"ID":"debd57b2-4a57-4f68-9409-33ba951c68bd","Type":"ContainerDied","Data":"1f471eb5ff939cbb022593eb01dc542370574b3663fea67bdc97605e14cdb7b7"} Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.217951 4922 scope.go:117] "RemoveContainer" containerID="9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.218663 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.220143 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9"] Jan 26 14:14:53 crc kubenswrapper[4922]: E0126 14:14:53.220419 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debd57b2-4a57-4f68-9409-33ba951c68bd" containerName="route-controller-manager" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.220442 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="debd57b2-4a57-4f68-9409-33ba951c68bd" containerName="route-controller-manager" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.220591 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="debd57b2-4a57-4f68-9409-33ba951c68bd" containerName="route-controller-manager" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.221088 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.231099 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9"] Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.245414 4922 scope.go:117] "RemoveContainer" containerID="9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6" Jan 26 14:14:53 crc kubenswrapper[4922]: E0126 14:14:53.245951 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6\": container with ID starting with 9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6 not found: ID does not exist" containerID="9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.246057 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6"} err="failed to get container status \"9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6\": rpc error: code = NotFound desc = could not find container \"9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6\": container with ID starting with 9c56f634584e2c170d7188a9fb42b78cf268939850c575ddcd5e01e86b4b39a6 not found: ID does not exist" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.359735 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-config\") pod \"debd57b2-4a57-4f68-9409-33ba951c68bd\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.360131 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debd57b2-4a57-4f68-9409-33ba951c68bd-serving-cert\") pod \"debd57b2-4a57-4f68-9409-33ba951c68bd\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.360248 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-client-ca\") pod 
\"debd57b2-4a57-4f68-9409-33ba951c68bd\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.360281 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wn2m\" (UniqueName: \"kubernetes.io/projected/debd57b2-4a57-4f68-9409-33ba951c68bd-kube-api-access-5wn2m\") pod \"debd57b2-4a57-4f68-9409-33ba951c68bd\" (UID: \"debd57b2-4a57-4f68-9409-33ba951c68bd\") " Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.361006 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "debd57b2-4a57-4f68-9409-33ba951c68bd" (UID: "debd57b2-4a57-4f68-9409-33ba951c68bd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.361111 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-config\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.361277 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzxsw\" (UniqueName: \"kubernetes.io/projected/c695d644-6e66-4be0-972e-826a13ec0756-kube-api-access-zzxsw\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.361344 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c695d644-6e66-4be0-972e-826a13ec0756-serving-cert\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.361390 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-client-ca\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.361513 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.361637 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-config" (OuterVolumeSpecName: "config") pod "debd57b2-4a57-4f68-9409-33ba951c68bd" (UID: "debd57b2-4a57-4f68-9409-33ba951c68bd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.365859 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debd57b2-4a57-4f68-9409-33ba951c68bd-kube-api-access-5wn2m" (OuterVolumeSpecName: "kube-api-access-5wn2m") pod "debd57b2-4a57-4f68-9409-33ba951c68bd" (UID: "debd57b2-4a57-4f68-9409-33ba951c68bd"). InnerVolumeSpecName "kube-api-access-5wn2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.365860 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/debd57b2-4a57-4f68-9409-33ba951c68bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "debd57b2-4a57-4f68-9409-33ba951c68bd" (UID: "debd57b2-4a57-4f68-9409-33ba951c68bd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.462905 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzxsw\" (UniqueName: \"kubernetes.io/projected/c695d644-6e66-4be0-972e-826a13ec0756-kube-api-access-zzxsw\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.462966 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c695d644-6e66-4be0-972e-826a13ec0756-serving-cert\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.463003 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-client-ca\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.463030 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-config\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.463095 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debd57b2-4a57-4f68-9409-33ba951c68bd-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.463107 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debd57b2-4a57-4f68-9409-33ba951c68bd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.463118 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wn2m\" (UniqueName: \"kubernetes.io/projected/debd57b2-4a57-4f68-9409-33ba951c68bd-kube-api-access-5wn2m\") on node \"crc\" DevicePath \"\"" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.464252 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-config\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.464304 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-client-ca\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.467976 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c695d644-6e66-4be0-972e-826a13ec0756-serving-cert\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.479733 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzxsw\" (UniqueName: \"kubernetes.io/projected/c695d644-6e66-4be0-972e-826a13ec0756-kube-api-access-zzxsw\") pod \"route-controller-manager-77f45d8f46-tsjj9\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.542225 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.560168 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm"] Jan 26 14:14:53 crc kubenswrapper[4922]: I0126 14:14:53.564192 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774665f746-9ktpm"] Jan 26 14:14:54 crc kubenswrapper[4922]: I0126 14:14:54.010026 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9"] Jan 26 14:14:54 crc kubenswrapper[4922]: W0126 14:14:54.019258 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc695d644_6e66_4be0_972e_826a13ec0756.slice/crio-46d198b5db185b527eab470b6f767b18a734ff073e10b627e50b36e2a0080c70 WatchSource:0}: Error finding container 46d198b5db185b527eab470b6f767b18a734ff073e10b627e50b36e2a0080c70: Status 404 returned error can't find the container with id 46d198b5db185b527eab470b6f767b18a734ff073e10b627e50b36e2a0080c70 Jan 26 14:14:54 crc kubenswrapper[4922]: I0126 14:14:54.228288 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" event={"ID":"c695d644-6e66-4be0-972e-826a13ec0756","Type":"ContainerStarted","Data":"314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877"} Jan 26 14:14:54 crc kubenswrapper[4922]: I0126 14:14:54.228357 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" event={"ID":"c695d644-6e66-4be0-972e-826a13ec0756","Type":"ContainerStarted","Data":"46d198b5db185b527eab470b6f767b18a734ff073e10b627e50b36e2a0080c70"} Jan 26 14:14:54 crc kubenswrapper[4922]: I0126 14:14:54.228632 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:54 crc kubenswrapper[4922]: I0126 14:14:54.618011 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:14:54 crc kubenswrapper[4922]: I0126 14:14:54.637282 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" podStartSLOduration=4.637263814 podStartE2EDuration="4.637263814s" podCreationTimestamp="2026-01-26 14:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:14:54.257450301 +0000 UTC m=+311.459713093" watchObservedRunningTime="2026-01-26 14:14:54.637263814 +0000 UTC m=+311.839526576" Jan 26 14:14:55 crc kubenswrapper[4922]: I0126 14:14:55.098515 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debd57b2-4a57-4f68-9409-33ba951c68bd" path="/var/lib/kubelet/pods/debd57b2-4a57-4f68-9409-33ba951c68bd/volumes" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.175308 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5"] Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.176287 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.178513 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.178629 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.182814 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5"] Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.250853 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04b5280f-d26b-4f56-bf87-b83f6e51ee10-config-volume\") pod \"collect-profiles-29490615-48gx5\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.250945 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mndn\" (UniqueName: \"kubernetes.io/projected/04b5280f-d26b-4f56-bf87-b83f6e51ee10-kube-api-access-7mndn\") pod \"collect-profiles-29490615-48gx5\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.251091 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04b5280f-d26b-4f56-bf87-b83f6e51ee10-secret-volume\") pod \"collect-profiles-29490615-48gx5\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.351954 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04b5280f-d26b-4f56-bf87-b83f6e51ee10-secret-volume\") pod \"collect-profiles-29490615-48gx5\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.352028 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04b5280f-d26b-4f56-bf87-b83f6e51ee10-config-volume\") pod \"collect-profiles-29490615-48gx5\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.352058 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mndn\" (UniqueName: \"kubernetes.io/projected/04b5280f-d26b-4f56-bf87-b83f6e51ee10-kube-api-access-7mndn\") pod \"collect-profiles-29490615-48gx5\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.352848 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04b5280f-d26b-4f56-bf87-b83f6e51ee10-config-volume\") pod 
\"collect-profiles-29490615-48gx5\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.362711 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04b5280f-d26b-4f56-bf87-b83f6e51ee10-secret-volume\") pod \"collect-profiles-29490615-48gx5\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.371817 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mndn\" (UniqueName: \"kubernetes.io/projected/04b5280f-d26b-4f56-bf87-b83f6e51ee10-kube-api-access-7mndn\") pod \"collect-profiles-29490615-48gx5\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.493815 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:00 crc kubenswrapper[4922]: I0126 14:15:00.902413 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5"] Jan 26 14:15:01 crc kubenswrapper[4922]: I0126 14:15:01.268412 4922 generic.go:334] "Generic (PLEG): container finished" podID="04b5280f-d26b-4f56-bf87-b83f6e51ee10" containerID="d416e50c2895f8a76223641ccdab2eb2bbd14d00ea82b5172b8c0ae2557fb5c1" exitCode=0 Jan 26 14:15:01 crc kubenswrapper[4922]: I0126 14:15:01.268528 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" event={"ID":"04b5280f-d26b-4f56-bf87-b83f6e51ee10","Type":"ContainerDied","Data":"d416e50c2895f8a76223641ccdab2eb2bbd14d00ea82b5172b8c0ae2557fb5c1"} Jan 26 14:15:01 crc kubenswrapper[4922]: I0126 14:15:01.270128 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" event={"ID":"04b5280f-d26b-4f56-bf87-b83f6e51ee10","Type":"ContainerStarted","Data":"64b2be8af31cabe3e447b91ad6ea6884bfc9a2fc8e1844054e50cf9a42cb6527"} Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.650745 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.792575 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mndn\" (UniqueName: \"kubernetes.io/projected/04b5280f-d26b-4f56-bf87-b83f6e51ee10-kube-api-access-7mndn\") pod \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.792753 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04b5280f-d26b-4f56-bf87-b83f6e51ee10-config-volume\") pod \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.792820 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04b5280f-d26b-4f56-bf87-b83f6e51ee10-secret-volume\") pod \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\" (UID: \"04b5280f-d26b-4f56-bf87-b83f6e51ee10\") " Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.793806 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b5280f-d26b-4f56-bf87-b83f6e51ee10-config-volume" (OuterVolumeSpecName: "config-volume") pod "04b5280f-d26b-4f56-bf87-b83f6e51ee10" (UID: "04b5280f-d26b-4f56-bf87-b83f6e51ee10"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.801655 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b5280f-d26b-4f56-bf87-b83f6e51ee10-kube-api-access-7mndn" (OuterVolumeSpecName: "kube-api-access-7mndn") pod "04b5280f-d26b-4f56-bf87-b83f6e51ee10" (UID: "04b5280f-d26b-4f56-bf87-b83f6e51ee10"). InnerVolumeSpecName "kube-api-access-7mndn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.801864 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b5280f-d26b-4f56-bf87-b83f6e51ee10-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04b5280f-d26b-4f56-bf87-b83f6e51ee10" (UID: "04b5280f-d26b-4f56-bf87-b83f6e51ee10"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.894292 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04b5280f-d26b-4f56-bf87-b83f6e51ee10-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.894357 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04b5280f-d26b-4f56-bf87-b83f6e51ee10-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:02 crc kubenswrapper[4922]: I0126 14:15:02.894382 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mndn\" (UniqueName: \"kubernetes.io/projected/04b5280f-d26b-4f56-bf87-b83f6e51ee10-kube-api-access-7mndn\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:03 crc kubenswrapper[4922]: I0126 14:15:03.292024 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" event={"ID":"04b5280f-d26b-4f56-bf87-b83f6e51ee10","Type":"ContainerDied","Data":"64b2be8af31cabe3e447b91ad6ea6884bfc9a2fc8e1844054e50cf9a42cb6527"} Jan 26 14:15:03 crc kubenswrapper[4922]: I0126 14:15:03.292445 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b2be8af31cabe3e447b91ad6ea6884bfc9a2fc8e1844054e50cf9a42cb6527" Jan 26 14:15:03 crc kubenswrapper[4922]: I0126 14:15:03.292151 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5" Jan 26 14:15:05 crc kubenswrapper[4922]: I0126 14:15:05.592788 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9"] Jan 26 14:15:05 crc kubenswrapper[4922]: I0126 14:15:05.593643 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" podUID="c695d644-6e66-4be0-972e-826a13ec0756" containerName="route-controller-manager" containerID="cri-o://314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877" gracePeriod=30 Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.002497 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.141109 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzxsw\" (UniqueName: \"kubernetes.io/projected/c695d644-6e66-4be0-972e-826a13ec0756-kube-api-access-zzxsw\") pod \"c695d644-6e66-4be0-972e-826a13ec0756\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.141275 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-client-ca\") pod \"c695d644-6e66-4be0-972e-826a13ec0756\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.141387 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c695d644-6e66-4be0-972e-826a13ec0756-serving-cert\") pod \"c695d644-6e66-4be0-972e-826a13ec0756\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.141426 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-config\") pod \"c695d644-6e66-4be0-972e-826a13ec0756\" (UID: \"c695d644-6e66-4be0-972e-826a13ec0756\") " Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.142532 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-client-ca" (OuterVolumeSpecName: "client-ca") pod "c695d644-6e66-4be0-972e-826a13ec0756" (UID: "c695d644-6e66-4be0-972e-826a13ec0756"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.142544 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-config" (OuterVolumeSpecName: "config") pod "c695d644-6e66-4be0-972e-826a13ec0756" (UID: "c695d644-6e66-4be0-972e-826a13ec0756"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.147213 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c695d644-6e66-4be0-972e-826a13ec0756-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c695d644-6e66-4be0-972e-826a13ec0756" (UID: "c695d644-6e66-4be0-972e-826a13ec0756"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.147324 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c695d644-6e66-4be0-972e-826a13ec0756-kube-api-access-zzxsw" (OuterVolumeSpecName: "kube-api-access-zzxsw") pod "c695d644-6e66-4be0-972e-826a13ec0756" (UID: "c695d644-6e66-4be0-972e-826a13ec0756"). InnerVolumeSpecName "kube-api-access-zzxsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.243001 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c695d644-6e66-4be0-972e-826a13ec0756-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.243058 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.243109 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzxsw\" (UniqueName: \"kubernetes.io/projected/c695d644-6e66-4be0-972e-826a13ec0756-kube-api-access-zzxsw\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.243131 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c695d644-6e66-4be0-972e-826a13ec0756-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.314038 4922 generic.go:334] "Generic (PLEG): container finished" podID="c695d644-6e66-4be0-972e-826a13ec0756" containerID="314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877" exitCode=0 Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.314218 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.314228 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" event={"ID":"c695d644-6e66-4be0-972e-826a13ec0756","Type":"ContainerDied","Data":"314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877"} Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.314901 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9" event={"ID":"c695d644-6e66-4be0-972e-826a13ec0756","Type":"ContainerDied","Data":"46d198b5db185b527eab470b6f767b18a734ff073e10b627e50b36e2a0080c70"} Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.314938 4922 scope.go:117] "RemoveContainer" containerID="314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.342369 4922 scope.go:117] "RemoveContainer" containerID="314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877" Jan 26 14:15:06 crc kubenswrapper[4922]: E0126 14:15:06.343034 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877\": container with ID starting with 314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877 not found: ID does not exist" containerID="314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.343167 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877"} err="failed to get container status \"314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877\": rpc error: code = NotFound desc = could not find container 
\"314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877\": container with ID starting with 314a4e2d0c48e92a0213f2cbfaf14feaf9b670171389bbcbe8ea2fc40b5ca877 not found: ID does not exist" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.366467 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9"] Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.372568 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-tsjj9"] Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.642854 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774665f746-88rsv"] Jan 26 14:15:06 crc kubenswrapper[4922]: E0126 14:15:06.643361 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b5280f-d26b-4f56-bf87-b83f6e51ee10" containerName="collect-profiles" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.643394 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b5280f-d26b-4f56-bf87-b83f6e51ee10" containerName="collect-profiles" Jan 26 14:15:06 crc kubenswrapper[4922]: E0126 14:15:06.643437 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c695d644-6e66-4be0-972e-826a13ec0756" containerName="route-controller-manager" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.643457 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c695d644-6e66-4be0-972e-826a13ec0756" containerName="route-controller-manager" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.643716 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b5280f-d26b-4f56-bf87-b83f6e51ee10" containerName="collect-profiles" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.643754 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c695d644-6e66-4be0-972e-826a13ec0756" containerName="route-controller-manager" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.646344 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.651328 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774665f746-88rsv"] Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.652446 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.652761 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.653122 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.653328 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.653373 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.655271 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.748990 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xk7\" (UniqueName: \"kubernetes.io/projected/e5b48ba9-5f07-4e92-a22e-8211b01141fb-kube-api-access-29xk7\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.749124 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b48ba9-5f07-4e92-a22e-8211b01141fb-config\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.749225 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5b48ba9-5f07-4e92-a22e-8211b01141fb-client-ca\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.749297 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5b48ba9-5f07-4e92-a22e-8211b01141fb-serving-cert\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.850665 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5b48ba9-5f07-4e92-a22e-8211b01141fb-client-ca\") pod 
\"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.850803 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5b48ba9-5f07-4e92-a22e-8211b01141fb-serving-cert\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.850847 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xk7\" (UniqueName: \"kubernetes.io/projected/e5b48ba9-5f07-4e92-a22e-8211b01141fb-kube-api-access-29xk7\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.850886 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b48ba9-5f07-4e92-a22e-8211b01141fb-config\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.852388 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5b48ba9-5f07-4e92-a22e-8211b01141fb-client-ca\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.852805 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b48ba9-5f07-4e92-a22e-8211b01141fb-config\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.863262 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5b48ba9-5f07-4e92-a22e-8211b01141fb-serving-cert\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.878738 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xk7\" (UniqueName: \"kubernetes.io/projected/e5b48ba9-5f07-4e92-a22e-8211b01141fb-kube-api-access-29xk7\") pod \"route-controller-manager-774665f746-88rsv\" (UID: \"e5b48ba9-5f07-4e92-a22e-8211b01141fb\") " pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:06 crc kubenswrapper[4922]: I0126 14:15:06.976359 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:07 crc kubenswrapper[4922]: I0126 14:15:07.128100 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c695d644-6e66-4be0-972e-826a13ec0756" path="/var/lib/kubelet/pods/c695d644-6e66-4be0-972e-826a13ec0756/volumes" Jan 26 14:15:07 crc kubenswrapper[4922]: I0126 14:15:07.455892 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774665f746-88rsv"] Jan 26 14:15:08 crc kubenswrapper[4922]: I0126 14:15:08.341780 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" event={"ID":"e5b48ba9-5f07-4e92-a22e-8211b01141fb","Type":"ContainerStarted","Data":"aae60106f1f0df4c050bc954277c4c5dd26725bb5dd8b6dc66136659946f18a9"} Jan 26 14:15:08 crc kubenswrapper[4922]: I0126 14:15:08.342416 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:08 crc kubenswrapper[4922]: I0126 14:15:08.342437 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" event={"ID":"e5b48ba9-5f07-4e92-a22e-8211b01141fb","Type":"ContainerStarted","Data":"af1d396442883627de2826070d84ec8fd5c4a844f2612f0ea9fa0d8dcb778297"} Jan 26 14:15:08 crc kubenswrapper[4922]: I0126 14:15:08.355109 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" Jan 26 14:15:08 crc kubenswrapper[4922]: I0126 14:15:08.371656 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-774665f746-88rsv" podStartSLOduration=3.371622044 podStartE2EDuration="3.371622044s" podCreationTimestamp="2026-01-26 14:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:15:08.370381727 +0000 UTC m=+325.572644529" watchObservedRunningTime="2026-01-26 14:15:08.371622044 +0000 UTC m=+325.573884816" Jan 26 14:15:41 crc kubenswrapper[4922]: I0126 14:15:41.307242 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:15:41 crc kubenswrapper[4922]: I0126 14:15:41.308147 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.096450 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ppv85"] Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.098034 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.114642 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ppv85"] Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.270921 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99129fea-1e02-4a05-8504-8d25f4c4c92b-bound-sa-token\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.271268 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/99129fea-1e02-4a05-8504-8d25f4c4c92b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.271325 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99129fea-1e02-4a05-8504-8d25f4c4c92b-trusted-ca\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.271449 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.271514 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/99129fea-1e02-4a05-8504-8d25f4c4c92b-registry-certificates\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.271560 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/99129fea-1e02-4a05-8504-8d25f4c4c92b-registry-tls\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.271623 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm2zt\" (UniqueName: \"kubernetes.io/projected/99129fea-1e02-4a05-8504-8d25f4c4c92b-kube-api-access-mm2zt\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.271685 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/99129fea-1e02-4a05-8504-8d25f4c4c92b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.298366 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.373729 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/99129fea-1e02-4a05-8504-8d25f4c4c92b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.373817 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99129fea-1e02-4a05-8504-8d25f4c4c92b-trusted-ca\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.373911 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/99129fea-1e02-4a05-8504-8d25f4c4c92b-registry-certificates\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.373951 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/99129fea-1e02-4a05-8504-8d25f4c4c92b-registry-tls\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.374006 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm2zt\" (UniqueName: \"kubernetes.io/projected/99129fea-1e02-4a05-8504-8d25f4c4c92b-kube-api-access-mm2zt\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.374056 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/99129fea-1e02-4a05-8504-8d25f4c4c92b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.374177 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99129fea-1e02-4a05-8504-8d25f4c4c92b-bound-sa-token\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.375113 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/99129fea-1e02-4a05-8504-8d25f4c4c92b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.377306 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/99129fea-1e02-4a05-8504-8d25f4c4c92b-registry-certificates\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.380962 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99129fea-1e02-4a05-8504-8d25f4c4c92b-trusted-ca\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.389856 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/99129fea-1e02-4a05-8504-8d25f4c4c92b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.391237 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/99129fea-1e02-4a05-8504-8d25f4c4c92b-registry-tls\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.402814 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm2zt\" (UniqueName: \"kubernetes.io/projected/99129fea-1e02-4a05-8504-8d25f4c4c92b-kube-api-access-mm2zt\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.412510 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99129fea-1e02-4a05-8504-8d25f4c4c92b-bound-sa-token\") pod \"image-registry-66df7c8f76-ppv85\" (UID: \"99129fea-1e02-4a05-8504-8d25f4c4c92b\") " pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.424399 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:44 crc kubenswrapper[4922]: I0126 14:15:44.876450 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ppv85"] Jan 26 14:15:44 crc kubenswrapper[4922]: W0126 14:15:44.882198 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99129fea_1e02_4a05_8504_8d25f4c4c92b.slice/crio-b044c7fd0f473bcec7e82aa7db4d39bab04942862c72a84b44a776fa3145b05b WatchSource:0}: Error finding container b044c7fd0f473bcec7e82aa7db4d39bab04942862c72a84b44a776fa3145b05b: Status 404 returned error can't find the container with id b044c7fd0f473bcec7e82aa7db4d39bab04942862c72a84b44a776fa3145b05b Jan 26 14:15:45 crc kubenswrapper[4922]: I0126 14:15:45.597777 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" event={"ID":"99129fea-1e02-4a05-8504-8d25f4c4c92b","Type":"ContainerStarted","Data":"8b597a7e10c6e83db0f0687736dc19da0d79b702ffe0520a60852bc7c4169e95"} Jan 26 14:15:45 crc kubenswrapper[4922]: I0126 14:15:45.597882 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" event={"ID":"99129fea-1e02-4a05-8504-8d25f4c4c92b","Type":"ContainerStarted","Data":"b044c7fd0f473bcec7e82aa7db4d39bab04942862c72a84b44a776fa3145b05b"} Jan 26 14:15:45 crc kubenswrapper[4922]: I0126 14:15:45.598029 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:15:45 crc kubenswrapper[4922]: I0126 14:15:45.627650 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" podStartSLOduration=1.6276274590000002 podStartE2EDuration="1.627627459s" podCreationTimestamp="2026-01-26 14:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:15:45.623996469 +0000 UTC m=+362.826259271" watchObservedRunningTime="2026-01-26 14:15:45.627627459 +0000 UTC m=+362.829890261" Jan 26 14:16:01 crc kubenswrapper[4922]: I0126 14:16:01.845175 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfx9p"] Jan 26 14:16:01 crc kubenswrapper[4922]: I0126 14:16:01.848271 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:01 crc kubenswrapper[4922]: I0126 14:16:01.850920 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 14:16:01 crc kubenswrapper[4922]: I0126 14:16:01.861601 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfx9p"] Jan 26 14:16:01 crc kubenswrapper[4922]: I0126 14:16:01.941402 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtrn2\" (UniqueName: \"kubernetes.io/projected/8da77431-c903-448f-892c-89371c3092d4-kube-api-access-qtrn2\") pod \"certified-operators-sfx9p\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") " pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:01 crc kubenswrapper[4922]: I0126 14:16:01.942032 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-utilities\") pod \"certified-operators-sfx9p\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") " pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:01 crc kubenswrapper[4922]: I0126 14:16:01.942149 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-catalog-content\") pod \"certified-operators-sfx9p\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") " pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.040617 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kj8wd"] Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.042205 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.043678 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-utilities\") pod \"certified-operators-sfx9p\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") " pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.043780 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-catalog-content\") pod \"certified-operators-sfx9p\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") " pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.044014 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtrn2\" (UniqueName: \"kubernetes.io/projected/8da77431-c903-448f-892c-89371c3092d4-kube-api-access-qtrn2\") pod \"certified-operators-sfx9p\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") " pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.045604 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-utilities\") pod \"certified-operators-sfx9p\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") " pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.046369 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.046667 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-catalog-content\") pod \"certified-operators-sfx9p\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") " pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.052435 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kj8wd"] Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.071908 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtrn2\" (UniqueName: \"kubernetes.io/projected/8da77431-c903-448f-892c-89371c3092d4-kube-api-access-qtrn2\") pod \"certified-operators-sfx9p\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") " pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.145810 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f24cd2f-292d-4fd4-9239-c18de70680ad-utilities\") pod \"redhat-operators-kj8wd\" (UID: \"8f24cd2f-292d-4fd4-9239-c18de70680ad\") " pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.145883 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmg4j\" (UniqueName: \"kubernetes.io/projected/8f24cd2f-292d-4fd4-9239-c18de70680ad-kube-api-access-xmg4j\") pod \"redhat-operators-kj8wd\" (UID: 
\"8f24cd2f-292d-4fd4-9239-c18de70680ad\") " pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.145951 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f24cd2f-292d-4fd4-9239-c18de70680ad-catalog-content\") pod \"redhat-operators-kj8wd\" (UID: \"8f24cd2f-292d-4fd4-9239-c18de70680ad\") " pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.168759 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.248159 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmg4j\" (UniqueName: \"kubernetes.io/projected/8f24cd2f-292d-4fd4-9239-c18de70680ad-kube-api-access-xmg4j\") pod \"redhat-operators-kj8wd\" (UID: \"8f24cd2f-292d-4fd4-9239-c18de70680ad\") " pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.248260 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f24cd2f-292d-4fd4-9239-c18de70680ad-catalog-content\") pod \"redhat-operators-kj8wd\" (UID: \"8f24cd2f-292d-4fd4-9239-c18de70680ad\") " pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.248322 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f24cd2f-292d-4fd4-9239-c18de70680ad-utilities\") pod \"redhat-operators-kj8wd\" (UID: \"8f24cd2f-292d-4fd4-9239-c18de70680ad\") " pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.249108 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f24cd2f-292d-4fd4-9239-c18de70680ad-utilities\") pod \"redhat-operators-kj8wd\" (UID: \"8f24cd2f-292d-4fd4-9239-c18de70680ad\") " pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.251590 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f24cd2f-292d-4fd4-9239-c18de70680ad-catalog-content\") pod \"redhat-operators-kj8wd\" (UID: \"8f24cd2f-292d-4fd4-9239-c18de70680ad\") " pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.277181 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmg4j\" (UniqueName: \"kubernetes.io/projected/8f24cd2f-292d-4fd4-9239-c18de70680ad-kube-api-access-xmg4j\") pod \"redhat-operators-kj8wd\" (UID: \"8f24cd2f-292d-4fd4-9239-c18de70680ad\") " pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.367012 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.407801 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfx9p"] Jan 26 14:16:02 crc kubenswrapper[4922]: W0126 14:16:02.413412 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da77431_c903_448f_892c_89371c3092d4.slice/crio-7beaef39d64165b006259462937a5d69ac4e07a335a98a1e2faf31528c1a1d3a WatchSource:0}: Error finding container 7beaef39d64165b006259462937a5d69ac4e07a335a98a1e2faf31528c1a1d3a: Status 404 returned error can't find the container with id 7beaef39d64165b006259462937a5d69ac4e07a335a98a1e2faf31528c1a1d3a Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.572267 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kj8wd"] Jan 26 14:16:02 crc kubenswrapper[4922]: W0126 14:16:02.582802 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f24cd2f_292d_4fd4_9239_c18de70680ad.slice/crio-d25e74db18e864faf48ab9df2d073075ee4ac06cffc23804b3666bf493132913 WatchSource:0}: Error finding container d25e74db18e864faf48ab9df2d073075ee4ac06cffc23804b3666bf493132913: Status 404 returned error can't find the container with id d25e74db18e864faf48ab9df2d073075ee4ac06cffc23804b3666bf493132913 Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.762500 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj8wd" event={"ID":"8f24cd2f-292d-4fd4-9239-c18de70680ad","Type":"ContainerStarted","Data":"d25e74db18e864faf48ab9df2d073075ee4ac06cffc23804b3666bf493132913"} Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.766043 4922 generic.go:334] "Generic (PLEG): container finished" podID="8da77431-c903-448f-892c-89371c3092d4" containerID="0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2" exitCode=0 Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.766093 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfx9p" event={"ID":"8da77431-c903-448f-892c-89371c3092d4","Type":"ContainerDied","Data":"0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2"} Jan 26 14:16:02 crc kubenswrapper[4922]: I0126 14:16:02.766114 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfx9p" event={"ID":"8da77431-c903-448f-892c-89371c3092d4","Type":"ContainerStarted","Data":"7beaef39d64165b006259462937a5d69ac4e07a335a98a1e2faf31528c1a1d3a"} Jan 26 14:16:03 crc kubenswrapper[4922]: I0126 14:16:03.776245 4922 generic.go:334] "Generic (PLEG): container finished" podID="8f24cd2f-292d-4fd4-9239-c18de70680ad" containerID="19f9825be4baedcefcdccde9e95f0a11cba9bf25b64dad98a9a291ccd4a5bb38" exitCode=0 Jan 26 14:16:03 crc kubenswrapper[4922]: I0126 14:16:03.776667 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj8wd" event={"ID":"8f24cd2f-292d-4fd4-9239-c18de70680ad","Type":"ContainerDied","Data":"19f9825be4baedcefcdccde9e95f0a11cba9bf25b64dad98a9a291ccd4a5bb38"} Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.243226 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cshmk"] Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.245173 4922 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.249193 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.265748 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cshmk"] Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.395599 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-utilities\") pod \"community-operators-cshmk\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.395688 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-catalog-content\") pod \"community-operators-cshmk\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.396041 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j89fs\" (UniqueName: \"kubernetes.io/projected/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-kube-api-access-j89fs\") pod \"community-operators-cshmk\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.447361 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-ppv85" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.450915 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w4dhl"] Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.456902 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.465754 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.470694 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w4dhl"] Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.497025 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-utilities\") pod \"community-operators-cshmk\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.497098 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-catalog-content\") pod \"community-operators-cshmk\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.497163 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j89fs\" (UniqueName: \"kubernetes.io/projected/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-kube-api-access-j89fs\") pod \"community-operators-cshmk\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.499174 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-utilities\") pod \"community-operators-cshmk\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.499648 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-catalog-content\") pod \"community-operators-cshmk\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.531450 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j89fs\" (UniqueName: \"kubernetes.io/projected/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-kube-api-access-j89fs\") pod \"community-operators-cshmk\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.554035 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dst2r"] Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.575308 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.599218 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7e36f7-a7c0-4fff-8b81-77738ded90e2-catalog-content\") pod \"redhat-marketplace-w4dhl\" (UID: \"bb7e36f7-a7c0-4fff-8b81-77738ded90e2\") " pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.599283 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzpd4\" (UniqueName: \"kubernetes.io/projected/bb7e36f7-a7c0-4fff-8b81-77738ded90e2-kube-api-access-nzpd4\") pod \"redhat-marketplace-w4dhl\" (UID: \"bb7e36f7-a7c0-4fff-8b81-77738ded90e2\") " pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.599377 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7e36f7-a7c0-4fff-8b81-77738ded90e2-utilities\") pod \"redhat-marketplace-w4dhl\" (UID: \"bb7e36f7-a7c0-4fff-8b81-77738ded90e2\") " pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.702363 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7e36f7-a7c0-4fff-8b81-77738ded90e2-catalog-content\") pod \"redhat-marketplace-w4dhl\" (UID: \"bb7e36f7-a7c0-4fff-8b81-77738ded90e2\") " pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.701671 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7e36f7-a7c0-4fff-8b81-77738ded90e2-catalog-content\") pod \"redhat-marketplace-w4dhl\" (UID: \"bb7e36f7-a7c0-4fff-8b81-77738ded90e2\") " pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.702890 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzpd4\" (UniqueName: \"kubernetes.io/projected/bb7e36f7-a7c0-4fff-8b81-77738ded90e2-kube-api-access-nzpd4\") pod \"redhat-marketplace-w4dhl\" (UID: \"bb7e36f7-a7c0-4fff-8b81-77738ded90e2\") " pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.702930 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7e36f7-a7c0-4fff-8b81-77738ded90e2-utilities\") pod \"redhat-marketplace-w4dhl\" (UID: \"bb7e36f7-a7c0-4fff-8b81-77738ded90e2\") " pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.703226 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7e36f7-a7c0-4fff-8b81-77738ded90e2-utilities\") pod \"redhat-marketplace-w4dhl\" (UID: \"bb7e36f7-a7c0-4fff-8b81-77738ded90e2\") " pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.740666 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzpd4\" (UniqueName: \"kubernetes.io/projected/bb7e36f7-a7c0-4fff-8b81-77738ded90e2-kube-api-access-nzpd4\") pod 
\"redhat-marketplace-w4dhl\" (UID: \"bb7e36f7-a7c0-4fff-8b81-77738ded90e2\") " pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.787889 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj8wd" event={"ID":"8f24cd2f-292d-4fd4-9239-c18de70680ad","Type":"ContainerStarted","Data":"a2e6fe49ab88b80c45aae3195db635a7f39c566e234f9c568f142b49eefb6a20"} Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.793245 4922 generic.go:334] "Generic (PLEG): container finished" podID="8da77431-c903-448f-892c-89371c3092d4" containerID="91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70" exitCode=0 Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.793375 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfx9p" event={"ID":"8da77431-c903-448f-892c-89371c3092d4","Type":"ContainerDied","Data":"91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70"} Jan 26 14:16:04 crc kubenswrapper[4922]: I0126 14:16:04.796849 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.006048 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cshmk"] Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.048782 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w4dhl"] Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.799470 4922 generic.go:334] "Generic (PLEG): container finished" podID="bb7e36f7-a7c0-4fff-8b81-77738ded90e2" containerID="85dabe25a131f1d68f1a906dea52bd221defa0b16a13685e5cf1bd380c722d96" exitCode=0 Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.799577 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4dhl" event={"ID":"bb7e36f7-a7c0-4fff-8b81-77738ded90e2","Type":"ContainerDied","Data":"85dabe25a131f1d68f1a906dea52bd221defa0b16a13685e5cf1bd380c722d96"} Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.800403 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4dhl" event={"ID":"bb7e36f7-a7c0-4fff-8b81-77738ded90e2","Type":"ContainerStarted","Data":"5c1b98d05d90f5022e677f93d7e1509ea7407b71fa928313bf2fe9bc54cf2e46"} Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.803766 4922 generic.go:334] "Generic (PLEG): container finished" podID="8f24cd2f-292d-4fd4-9239-c18de70680ad" containerID="a2e6fe49ab88b80c45aae3195db635a7f39c566e234f9c568f142b49eefb6a20" exitCode=0 Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.803825 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj8wd" event={"ID":"8f24cd2f-292d-4fd4-9239-c18de70680ad","Type":"ContainerDied","Data":"a2e6fe49ab88b80c45aae3195db635a7f39c566e234f9c568f142b49eefb6a20"} Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.812681 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfx9p" event={"ID":"8da77431-c903-448f-892c-89371c3092d4","Type":"ContainerStarted","Data":"b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025"} Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.814572 4922 generic.go:334] "Generic (PLEG): container finished" podID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" 
containerID="897e25104e2e37d53b1c116ac72cfd6113afaddb7e962c35f337b79cf6b99881" exitCode=0 Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.814649 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cshmk" event={"ID":"5e368904-69fc-43c6-b5d1-9e4bfdf7e402","Type":"ContainerDied","Data":"897e25104e2e37d53b1c116ac72cfd6113afaddb7e962c35f337b79cf6b99881"} Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.814692 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cshmk" event={"ID":"5e368904-69fc-43c6-b5d1-9e4bfdf7e402","Type":"ContainerStarted","Data":"0239e6c2ad972ba2cbf392a02638ea7d5de825a9305042c6c5cbaa6ad1996a2d"} Jan 26 14:16:05 crc kubenswrapper[4922]: I0126 14:16:05.876220 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfx9p" podStartSLOduration=2.406737562 podStartE2EDuration="4.876198966s" podCreationTimestamp="2026-01-26 14:16:01 +0000 UTC" firstStartedPulling="2026-01-26 14:16:02.769907408 +0000 UTC m=+379.972170190" lastFinishedPulling="2026-01-26 14:16:05.239368812 +0000 UTC m=+382.441631594" observedRunningTime="2026-01-26 14:16:05.873893596 +0000 UTC m=+383.076156378" watchObservedRunningTime="2026-01-26 14:16:05.876198966 +0000 UTC m=+383.078461748" Jan 26 14:16:06 crc kubenswrapper[4922]: I0126 14:16:06.825649 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kj8wd" event={"ID":"8f24cd2f-292d-4fd4-9239-c18de70680ad","Type":"ContainerStarted","Data":"5c273f946d71f1ec80a45179826624938b9ae35055ac66960db2c1370a1025a2"} Jan 26 14:16:06 crc kubenswrapper[4922]: I0126 14:16:06.850339 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kj8wd" podStartSLOduration=2.334878584 podStartE2EDuration="4.850323458s" podCreationTimestamp="2026-01-26 14:16:02 +0000 UTC" firstStartedPulling="2026-01-26 14:16:03.779704999 +0000 UTC m=+380.981967781" lastFinishedPulling="2026-01-26 14:16:06.295149883 +0000 UTC m=+383.497412655" observedRunningTime="2026-01-26 14:16:06.847945186 +0000 UTC m=+384.050207958" watchObservedRunningTime="2026-01-26 14:16:06.850323458 +0000 UTC m=+384.052586230" Jan 26 14:16:07 crc kubenswrapper[4922]: I0126 14:16:07.835713 4922 generic.go:334] "Generic (PLEG): container finished" podID="bb7e36f7-a7c0-4fff-8b81-77738ded90e2" containerID="065ea1abb45c63c5422ab40dc55af2649faa37fba266c00328c249d165a4e54c" exitCode=0 Jan 26 14:16:07 crc kubenswrapper[4922]: I0126 14:16:07.835867 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4dhl" event={"ID":"bb7e36f7-a7c0-4fff-8b81-77738ded90e2","Type":"ContainerDied","Data":"065ea1abb45c63c5422ab40dc55af2649faa37fba266c00328c249d165a4e54c"} Jan 26 14:16:07 crc kubenswrapper[4922]: I0126 14:16:07.839460 4922 generic.go:334] "Generic (PLEG): container finished" podID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerID="1a51d7c0563c73292865e89929057940f04180f0457a9a4578d533dd01c1c43e" exitCode=0 Jan 26 14:16:07 crc kubenswrapper[4922]: I0126 14:16:07.839846 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cshmk" event={"ID":"5e368904-69fc-43c6-b5d1-9e4bfdf7e402","Type":"ContainerDied","Data":"1a51d7c0563c73292865e89929057940f04180f0457a9a4578d533dd01c1c43e"} Jan 26 14:16:08 crc kubenswrapper[4922]: I0126 14:16:08.852306 4922 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cshmk" event={"ID":"5e368904-69fc-43c6-b5d1-9e4bfdf7e402","Type":"ContainerStarted","Data":"184a6c355527b1725ed00cc4cfdd2315c82887c44240e36c5cfce4542597716d"} Jan 26 14:16:08 crc kubenswrapper[4922]: I0126 14:16:08.861827 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4dhl" event={"ID":"bb7e36f7-a7c0-4fff-8b81-77738ded90e2","Type":"ContainerStarted","Data":"0f147f559bcf2792727f6e159bdea9803d05bd540d86ba5376f4fac880b3115b"} Jan 26 14:16:08 crc kubenswrapper[4922]: I0126 14:16:08.883401 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cshmk" podStartSLOduration=2.398074714 podStartE2EDuration="4.883370596s" podCreationTimestamp="2026-01-26 14:16:04 +0000 UTC" firstStartedPulling="2026-01-26 14:16:05.816953474 +0000 UTC m=+383.019216276" lastFinishedPulling="2026-01-26 14:16:08.302249356 +0000 UTC m=+385.504512158" observedRunningTime="2026-01-26 14:16:08.882732607 +0000 UTC m=+386.084995389" watchObservedRunningTime="2026-01-26 14:16:08.883370596 +0000 UTC m=+386.085633378" Jan 26 14:16:08 crc kubenswrapper[4922]: I0126 14:16:08.905233 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w4dhl" podStartSLOduration=2.364623592 podStartE2EDuration="4.905210796s" podCreationTimestamp="2026-01-26 14:16:04 +0000 UTC" firstStartedPulling="2026-01-26 14:16:05.803040634 +0000 UTC m=+383.005303406" lastFinishedPulling="2026-01-26 14:16:08.343627838 +0000 UTC m=+385.545890610" observedRunningTime="2026-01-26 14:16:08.903007 +0000 UTC m=+386.105269782" watchObservedRunningTime="2026-01-26 14:16:08.905210796 +0000 UTC m=+386.107473578" Jan 26 14:16:11 crc kubenswrapper[4922]: I0126 14:16:11.308007 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:16:11 crc kubenswrapper[4922]: I0126 14:16:11.308132 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:16:12 crc kubenswrapper[4922]: I0126 14:16:12.169533 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:12 crc kubenswrapper[4922]: I0126 14:16:12.170025 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:12 crc kubenswrapper[4922]: I0126 14:16:12.224958 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:12 crc kubenswrapper[4922]: I0126 14:16:12.367825 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:12 crc kubenswrapper[4922]: I0126 14:16:12.367909 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:12 crc kubenswrapper[4922]: 
Jan 26 14:16:12 crc kubenswrapper[4922]: I0126 14:16:12.952354 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 14:16:13 crc kubenswrapper[4922]: I0126 14:16:13.419756 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kj8wd" podUID="8f24cd2f-292d-4fd4-9239-c18de70680ad" containerName="registry-server" probeResult="failure" output=< Jan 26 14:16:13 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Jan 26 14:16:13 crc kubenswrapper[4922]: > Jan 26 14:16:14 crc kubenswrapper[4922]: I0126 14:16:14.576018 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:14 crc kubenswrapper[4922]: I0126 14:16:14.576117 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:14 crc kubenswrapper[4922]: I0126 14:16:14.644535 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:14 crc kubenswrapper[4922]: I0126 14:16:14.799186 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:14 crc kubenswrapper[4922]: I0126 14:16:14.799333 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:14 crc kubenswrapper[4922]: I0126 14:16:14.864385 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:14 crc kubenswrapper[4922]: I0126 14:16:14.966532 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cshmk" Jan 26 14:16:14 crc kubenswrapper[4922]: I0126 14:16:14.975725 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w4dhl" Jan 26 14:16:22 crc kubenswrapper[4922]: I0126 14:16:22.437715 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:22 crc kubenswrapper[4922]: I0126 14:16:22.513118 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kj8wd" Jan 26 14:16:29 crc kubenswrapper[4922]: I0126 14:16:29.609776 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" podUID="49958f99-8b05-4ebb-9eb6-396020c374eb" containerName="registry" containerID="cri-o://dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d" gracePeriod=30 Jan 26 14:16:30 crc kubenswrapper[4922]: I0126 14:16:30.557564 4922 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-dst2r container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.6:5000/healthz\": dial tcp 10.217.0.6:5000: connect: connection refused" start-of-body= Jan 26 14:16:30 crc kubenswrapper[4922]: I0126 14:16:30.557654 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" podUID="49958f99-8b05-4ebb-9eb6-396020c374eb" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.6:5000/healthz\": dial tcp 10.217.0.6:5000: connect: connection refused"
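
Two probe-failure shapes appear in this window: the registry-server startup probe on the catalog pods reports 'timeout: failed to connect service ":50051" within 1s', while the HTTP probes report 'connect: connection refused' once the target process is gone. Below is a minimal Go sketch of a connect-with-deadline check against that port, assuming a plain TCP dial stands in for the real check (likely a gRPC health RPC; the actual probe binary is not named in the log):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // ":50051" is the registry-server gRPC port named in the probe output above;
        // an empty host means the local machine, as in the logged message.
        addr := ":50051"
        conn, err := net.DialTimeout("tcp", addr, 1*time.Second)
        if err != nil {
            // Two failure modes seen in the log: the 1s budget is exhausted while the
            // service is still coming up (the kj8wd output), or the dial is rejected
            // immediately with "connection refused" when nothing is listening
            // (the image-registry HTTP probe at 14:16:30).
            fmt.Printf("failed to connect service %q within 1s: %v\n", addr, err)
            os.Exit(1)
        }
        conn.Close()
    }

That distinction matches the timings above: the kj8wd startup probe burns its full one-second budget before failing, while the registry readiness probe fails instantly once the container has exited.
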
Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.871287 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.962920 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-tls\") pod \"49958f99-8b05-4ebb-9eb6-396020c374eb\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.963012 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49958f99-8b05-4ebb-9eb6-396020c374eb-ca-trust-extracted\") pod \"49958f99-8b05-4ebb-9eb6-396020c374eb\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.963083 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-certificates\") pod \"49958f99-8b05-4ebb-9eb6-396020c374eb\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.963220 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"49958f99-8b05-4ebb-9eb6-396020c374eb\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.963244 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-bound-sa-token\") pod \"49958f99-8b05-4ebb-9eb6-396020c374eb\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.963275 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp55n\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-kube-api-access-wp55n\") pod \"49958f99-8b05-4ebb-9eb6-396020c374eb\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.963310 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49958f99-8b05-4ebb-9eb6-396020c374eb-installation-pull-secrets\") pod \"49958f99-8b05-4ebb-9eb6-396020c374eb\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.963337 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-trusted-ca\") pod \"49958f99-8b05-4ebb-9eb6-396020c374eb\" (UID: \"49958f99-8b05-4ebb-9eb6-396020c374eb\") " Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.964481 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "49958f99-8b05-4ebb-9eb6-396020c374eb" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.965416 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "49958f99-8b05-4ebb-9eb6-396020c374eb" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.971698 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49958f99-8b05-4ebb-9eb6-396020c374eb-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "49958f99-8b05-4ebb-9eb6-396020c374eb" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.971909 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-kube-api-access-wp55n" (OuterVolumeSpecName: "kube-api-access-wp55n") pod "49958f99-8b05-4ebb-9eb6-396020c374eb" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb"). InnerVolumeSpecName "kube-api-access-wp55n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.973354 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "49958f99-8b05-4ebb-9eb6-396020c374eb" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.974053 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "49958f99-8b05-4ebb-9eb6-396020c374eb" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.977613 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "49958f99-8b05-4ebb-9eb6-396020c374eb" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 14:16:31 crc kubenswrapper[4922]: I0126 14:16:31.985443 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49958f99-8b05-4ebb-9eb6-396020c374eb-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "49958f99-8b05-4ebb-9eb6-396020c374eb" (UID: "49958f99-8b05-4ebb-9eb6-396020c374eb"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.016694 4922 generic.go:334] "Generic (PLEG): container finished" podID="49958f99-8b05-4ebb-9eb6-396020c374eb" containerID="dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d" exitCode=0 Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.016762 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" event={"ID":"49958f99-8b05-4ebb-9eb6-396020c374eb","Type":"ContainerDied","Data":"dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d"} Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.016804 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" event={"ID":"49958f99-8b05-4ebb-9eb6-396020c374eb","Type":"ContainerDied","Data":"d620a1a67708cf675f5db9ca6196200fcb894ef7bc7246a3d81ed6c319be391d"} Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.016828 4922 scope.go:117] "RemoveContainer" containerID="dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.016958 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dst2r" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.047839 4922 scope.go:117] "RemoveContainer" containerID="dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d" Jan 26 14:16:32 crc kubenswrapper[4922]: E0126 14:16:32.048641 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d\": container with ID starting with dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d not found: ID does not exist" containerID="dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.048712 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d"} err="failed to get container status \"dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d\": rpc error: code = NotFound desc = could not find container \"dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d\": container with ID starting with dbd297ef58e356327b1e2a5a54469597bc29999014362704ce798c516a16a13d not found: ID does not exist" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.068584 4922 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.068635 4922 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49958f99-8b05-4ebb-9eb6-396020c374eb-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.068654 4922 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.068673 4922 reconciler_common.go:293] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.068689 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp55n\" (UniqueName: \"kubernetes.io/projected/49958f99-8b05-4ebb-9eb6-396020c374eb-kube-api-access-wp55n\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.068744 4922 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49958f99-8b05-4ebb-9eb6-396020c374eb-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.068760 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49958f99-8b05-4ebb-9eb6-396020c374eb-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.070357 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dst2r"] Jan 26 14:16:32 crc kubenswrapper[4922]: I0126 14:16:32.078528 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dst2r"] Jan 26 14:16:33 crc kubenswrapper[4922]: I0126 14:16:33.105598 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49958f99-8b05-4ebb-9eb6-396020c374eb" path="/var/lib/kubelet/pods/49958f99-8b05-4ebb-9eb6-396020c374eb/volumes" Jan 26 14:16:41 crc kubenswrapper[4922]: I0126 14:16:41.306828 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:16:41 crc kubenswrapper[4922]: I0126 14:16:41.307508 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:16:41 crc kubenswrapper[4922]: I0126 14:16:41.307565 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:16:41 crc kubenswrapper[4922]: I0126 14:16:41.309308 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6c7dbd41ad163fa2c442937e7ba458ff681ccb83b22aad8b12d1a7403d8aa48"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:16:41 crc kubenswrapper[4922]: I0126 14:16:41.309397 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://e6c7dbd41ad163fa2c442937e7ba458ff681ccb83b22aad8b12d1a7403d8aa48" gracePeriod=600 Jan 26 14:16:42 crc kubenswrapper[4922]: I0126 14:16:42.093898 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" 
containerID="e6c7dbd41ad163fa2c442937e7ba458ff681ccb83b22aad8b12d1a7403d8aa48" exitCode=0 Jan 26 14:16:42 crc kubenswrapper[4922]: I0126 14:16:42.094016 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"e6c7dbd41ad163fa2c442937e7ba458ff681ccb83b22aad8b12d1a7403d8aa48"} Jan 26 14:16:42 crc kubenswrapper[4922]: I0126 14:16:42.094264 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"e6cf20dd05518b0703e95fb22d716a156a551a6feacd1fad61c7d301c3595e35"} Jan 26 14:16:42 crc kubenswrapper[4922]: I0126 14:16:42.094290 4922 scope.go:117] "RemoveContainer" containerID="f111724a8f80719e89f4adfbaad88f1cae802acc526a57f5be05de231a622117" Jan 26 14:18:41 crc kubenswrapper[4922]: I0126 14:18:41.307747 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:18:41 crc kubenswrapper[4922]: I0126 14:18:41.308659 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:19:11 crc kubenswrapper[4922]: I0126 14:19:11.307470 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:19:11 crc kubenswrapper[4922]: I0126 14:19:11.307980 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:19:41 crc kubenswrapper[4922]: I0126 14:19:41.306952 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:19:41 crc kubenswrapper[4922]: I0126 14:19:41.308217 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:19:41 crc kubenswrapper[4922]: I0126 14:19:41.308293 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:19:41 crc kubenswrapper[4922]: I0126 14:19:41.309204 4922 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6cf20dd05518b0703e95fb22d716a156a551a6feacd1fad61c7d301c3595e35"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:19:41 crc kubenswrapper[4922]: I0126 14:19:41.309308 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://e6cf20dd05518b0703e95fb22d716a156a551a6feacd1fad61c7d301c3595e35" gracePeriod=600 Jan 26 14:19:42 crc kubenswrapper[4922]: I0126 14:19:42.338229 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="e6cf20dd05518b0703e95fb22d716a156a551a6feacd1fad61c7d301c3595e35" exitCode=0 Jan 26 14:19:42 crc kubenswrapper[4922]: I0126 14:19:42.338327 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"e6cf20dd05518b0703e95fb22d716a156a551a6feacd1fad61c7d301c3595e35"} Jan 26 14:19:42 crc kubenswrapper[4922]: I0126 14:19:42.339114 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"4826d47a8978aad8de4d72d3370de97b0ca58dd44ff72fdfe2ad2319c73f0def"} Jan 26 14:19:42 crc kubenswrapper[4922]: I0126 14:19:42.339150 4922 scope.go:117] "RemoveContainer" containerID="e6c7dbd41ad163fa2c442937e7ba458ff681ccb83b22aad8b12d1a7403d8aa48" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.848703 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht"] Jan 26 14:20:22 crc kubenswrapper[4922]: E0126 14:20:22.849509 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49958f99-8b05-4ebb-9eb6-396020c374eb" containerName="registry" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.849524 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="49958f99-8b05-4ebb-9eb6-396020c374eb" containerName="registry" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.849641 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="49958f99-8b05-4ebb-9eb6-396020c374eb" containerName="registry" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.850040 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.852695 4922 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tbkwc" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.854941 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.855647 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.862757 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vv74v"] Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.864007 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vv74v" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.865825 4922 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tsmwt" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.866273 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht"] Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.883240 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-hdzlp"] Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.884164 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.887040 4922 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jfnmk" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.887192 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vv74v"] Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.891181 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-hdzlp"] Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.975516 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7p95\" (UniqueName: \"kubernetes.io/projected/0ac6b35b-af7a-4913-985e-8d42d2f246f9-kube-api-access-p7p95\") pod \"cert-manager-858654f9db-vv74v\" (UID: \"0ac6b35b-af7a-4913-985e-8d42d2f246f9\") " pod="cert-manager/cert-manager-858654f9db-vv74v" Jan 26 14:20:22 crc kubenswrapper[4922]: I0126 14:20:22.975763 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfz8k\" (UniqueName: \"kubernetes.io/projected/90a11b15-590d-43f0-957a-67389e3cd75b-kube-api-access-bfz8k\") pod \"cert-manager-cainjector-cf98fcc89-vw7ht\" (UID: \"90a11b15-590d-43f0-957a-67389e3cd75b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.076925 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgx5z\" (UniqueName: \"kubernetes.io/projected/0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124-kube-api-access-tgx5z\") pod \"cert-manager-webhook-687f57d79b-hdzlp\" (UID: \"0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124\") " pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" Jan 26 14:20:23 crc 
kubenswrapper[4922]: I0126 14:20:23.077011 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7p95\" (UniqueName: \"kubernetes.io/projected/0ac6b35b-af7a-4913-985e-8d42d2f246f9-kube-api-access-p7p95\") pod \"cert-manager-858654f9db-vv74v\" (UID: \"0ac6b35b-af7a-4913-985e-8d42d2f246f9\") " pod="cert-manager/cert-manager-858654f9db-vv74v" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.077225 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfz8k\" (UniqueName: \"kubernetes.io/projected/90a11b15-590d-43f0-957a-67389e3cd75b-kube-api-access-bfz8k\") pod \"cert-manager-cainjector-cf98fcc89-vw7ht\" (UID: \"90a11b15-590d-43f0-957a-67389e3cd75b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.094682 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfz8k\" (UniqueName: \"kubernetes.io/projected/90a11b15-590d-43f0-957a-67389e3cd75b-kube-api-access-bfz8k\") pod \"cert-manager-cainjector-cf98fcc89-vw7ht\" (UID: \"90a11b15-590d-43f0-957a-67389e3cd75b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.095368 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7p95\" (UniqueName: \"kubernetes.io/projected/0ac6b35b-af7a-4913-985e-8d42d2f246f9-kube-api-access-p7p95\") pod \"cert-manager-858654f9db-vv74v\" (UID: \"0ac6b35b-af7a-4913-985e-8d42d2f246f9\") " pod="cert-manager/cert-manager-858654f9db-vv74v" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.166149 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.178725 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgx5z\" (UniqueName: \"kubernetes.io/projected/0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124-kube-api-access-tgx5z\") pod \"cert-manager-webhook-687f57d79b-hdzlp\" (UID: \"0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124\") " pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.199970 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgx5z\" (UniqueName: \"kubernetes.io/projected/0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124-kube-api-access-tgx5z\") pod \"cert-manager-webhook-687f57d79b-hdzlp\" (UID: \"0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124\") " pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.232564 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vv74v" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.240970 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.400583 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht"] Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.407934 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.468344 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vv74v"] Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.519758 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-hdzlp"] Jan 26 14:20:23 crc kubenswrapper[4922]: W0126 14:20:23.522110 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cfa6d3f_9300_4d9a_b0d7_c1c321bb0124.slice/crio-ce5c679079d4f670a6384e8b168e6bdde086678b29d241e7ec823cc479892fd6 WatchSource:0}: Error finding container ce5c679079d4f670a6384e8b168e6bdde086678b29d241e7ec823cc479892fd6: Status 404 returned error can't find the container with id ce5c679079d4f670a6384e8b168e6bdde086678b29d241e7ec823cc479892fd6 Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.622481 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" event={"ID":"0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124","Type":"ContainerStarted","Data":"ce5c679079d4f670a6384e8b168e6bdde086678b29d241e7ec823cc479892fd6"} Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.625277 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht" event={"ID":"90a11b15-590d-43f0-957a-67389e3cd75b","Type":"ContainerStarted","Data":"171d92453fd3b5b123ce0f5338a210ec4cfa0550b43ecde94dd5d5fed6a7e7ad"} Jan 26 14:20:23 crc kubenswrapper[4922]: I0126 14:20:23.626350 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vv74v" event={"ID":"0ac6b35b-af7a-4913-985e-8d42d2f246f9","Type":"ContainerStarted","Data":"2e44db0457c11b50f4cecb089a762f366bd2bf0f603963505b90740c472245d5"} Jan 26 14:20:30 crc kubenswrapper[4922]: I0126 14:20:30.675155 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vv74v" event={"ID":"0ac6b35b-af7a-4913-985e-8d42d2f246f9","Type":"ContainerStarted","Data":"5d0a93a10996612593ff1e4956885c17971e3ceabb398ee8005c8529082f118e"} Jan 26 14:20:30 crc kubenswrapper[4922]: I0126 14:20:30.702963 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vv74v" podStartSLOduration=2.578729233 podStartE2EDuration="8.702741232s" podCreationTimestamp="2026-01-26 14:20:22 +0000 UTC" firstStartedPulling="2026-01-26 14:20:23.479651576 +0000 UTC m=+640.681914338" lastFinishedPulling="2026-01-26 14:20:29.603663565 +0000 UTC m=+646.805926337" observedRunningTime="2026-01-26 14:20:30.695171859 +0000 UTC m=+647.897434631" watchObservedRunningTime="2026-01-26 14:20:30.702741232 +0000 UTC m=+647.905004004" Jan 26 14:20:31 crc kubenswrapper[4922]: I0126 14:20:31.683940 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" 
event={"ID":"0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124","Type":"ContainerStarted","Data":"e16129ea28f82885b4a4c9623537c0a76f3b4dc152691d6ced91cc315f8e566e"} Jan 26 14:20:31 crc kubenswrapper[4922]: I0126 14:20:31.684454 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" Jan 26 14:20:31 crc kubenswrapper[4922]: I0126 14:20:31.686208 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht" event={"ID":"90a11b15-590d-43f0-957a-67389e3cd75b","Type":"ContainerStarted","Data":"75c5e953e2465630d065efecb8bbb92a3e226a6e38418e51c7761b434989f1e2"} Jan 26 14:20:31 crc kubenswrapper[4922]: I0126 14:20:31.705024 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" podStartSLOduration=2.427777892 podStartE2EDuration="9.705001123s" podCreationTimestamp="2026-01-26 14:20:22 +0000 UTC" firstStartedPulling="2026-01-26 14:20:23.526577198 +0000 UTC m=+640.728839970" lastFinishedPulling="2026-01-26 14:20:30.803800429 +0000 UTC m=+648.006063201" observedRunningTime="2026-01-26 14:20:31.703031097 +0000 UTC m=+648.905293899" watchObservedRunningTime="2026-01-26 14:20:31.705001123 +0000 UTC m=+648.907263895" Jan 26 14:20:31 crc kubenswrapper[4922]: I0126 14:20:31.721190 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vw7ht" podStartSLOduration=2.512339389 podStartE2EDuration="9.721162759s" podCreationTimestamp="2026-01-26 14:20:22 +0000 UTC" firstStartedPulling="2026-01-26 14:20:23.407656912 +0000 UTC m=+640.609919684" lastFinishedPulling="2026-01-26 14:20:30.616480242 +0000 UTC m=+647.818743054" observedRunningTime="2026-01-26 14:20:31.71733295 +0000 UTC m=+648.919595792" watchObservedRunningTime="2026-01-26 14:20:31.721162759 +0000 UTC m=+648.923425561" Jan 26 14:20:32 crc kubenswrapper[4922]: I0126 14:20:32.739363 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5m7p9"] Jan 26 14:20:32 crc kubenswrapper[4922]: I0126 14:20:32.740345 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovn-controller" containerID="cri-o://b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8" gracePeriod=30 Jan 26 14:20:32 crc kubenswrapper[4922]: I0126 14:20:32.740399 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="nbdb" containerID="cri-o://42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1" gracePeriod=30 Jan 26 14:20:32 crc kubenswrapper[4922]: I0126 14:20:32.740488 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="sbdb" containerID="cri-o://c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09" gracePeriod=30 Jan 26 14:20:32 crc kubenswrapper[4922]: I0126 14:20:32.740592 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kube-rbac-proxy-node" containerID="cri-o://3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4" 
gracePeriod=30 Jan 26 14:20:32 crc kubenswrapper[4922]: I0126 14:20:32.740765 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="northd" containerID="cri-o://13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e" gracePeriod=30 Jan 26 14:20:32 crc kubenswrapper[4922]: I0126 14:20:32.740804 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovn-acl-logging" containerID="cri-o://eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e" gracePeriod=30 Jan 26 14:20:32 crc kubenswrapper[4922]: I0126 14:20:32.740815 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710" gracePeriod=30 Jan 26 14:20:32 crc kubenswrapper[4922]: I0126 14:20:32.769610 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" containerID="cri-o://8fb06ac2e79a533c1628fc31291df9c8a1c2ac28c39bc347082a4d3fa718ba74" gracePeriod=30 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.704951 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovnkube-controller/3.log" Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.708055 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovn-acl-logging/0.log" Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.708679 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovn-controller/0.log" Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709146 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="8fb06ac2e79a533c1628fc31291df9c8a1c2ac28c39bc347082a4d3fa718ba74" exitCode=0 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709181 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09" exitCode=0 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709195 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1" exitCode=0 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709207 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e" exitCode=0 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709218 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710" exitCode=0 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709231 4922 generic.go:334] 
"Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4" exitCode=0 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709249 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e" exitCode=143 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709261 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerID="b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8" exitCode=143 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709271 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"8fb06ac2e79a533c1628fc31291df9c8a1c2ac28c39bc347082a4d3fa718ba74"} Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709354 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09"} Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709376 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1"} Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709396 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e"} Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709415 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710"} Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709434 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4"} Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709443 4922 scope.go:117] "RemoveContainer" containerID="e34c42695ab5ca128d9c896f3e4f98aac2465510ba6caef89c668ed050a2aff0" Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709455 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e"} Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.709475 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8"} Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.712290 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/2.log" Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.714372 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/1.log" Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.714425 4922 generic.go:334] "Generic (PLEG): container finished" podID="103e8f62-57c7-4d49-b740-16d357710e61" containerID="8af1882e572fae107a17e68afc2597eb00a381ab59c787a07cad8c9e8356abeb" exitCode=2 Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.714472 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zx7f" event={"ID":"103e8f62-57c7-4d49-b740-16d357710e61","Type":"ContainerDied","Data":"8af1882e572fae107a17e68afc2597eb00a381ab59c787a07cad8c9e8356abeb"} Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.717702 4922 scope.go:117] "RemoveContainer" containerID="8af1882e572fae107a17e68afc2597eb00a381ab59c787a07cad8c9e8356abeb" Jan 26 14:20:33 crc kubenswrapper[4922]: E0126 14:20:33.717969 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9zx7f_openshift-multus(103e8f62-57c7-4d49-b740-16d357710e61)\"" pod="openshift-multus/multus-9zx7f" podUID="103e8f62-57c7-4d49-b740-16d357710e61" Jan 26 14:20:33 crc kubenswrapper[4922]: I0126 14:20:33.829838 4922 scope.go:117] "RemoveContainer" containerID="092d5ba7f7b661cf6612ee09d0b3689fe009a8532d147f77608f9d698f75d172" Jan 26 14:20:33 crc kubenswrapper[4922]: E0126 14:20:33.890525 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09 is running failed: container process not found" containerID="c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 26 14:20:33 crc kubenswrapper[4922]: E0126 14:20:33.890555 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1 is running failed: container process not found" containerID="42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 26 14:20:33 crc kubenswrapper[4922]: E0126 14:20:33.890911 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09 is running failed: container process not found" containerID="c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 26 14:20:33 crc kubenswrapper[4922]: E0126 14:20:33.891436 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09 is running failed: container process not found" containerID="c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 26 14:20:33 crc kubenswrapper[4922]: E0126 14:20:33.891474 4922 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="sbdb" Jan 26 14:20:33 crc kubenswrapper[4922]: E0126 14:20:33.891787 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1 is running failed: container process not found" containerID="42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 26 14:20:33 crc kubenswrapper[4922]: E0126 14:20:33.892208 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1 is running failed: container process not found" containerID="42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 26 14:20:33 crc kubenswrapper[4922]: E0126 14:20:33.892232 4922 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="nbdb" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.154386 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovn-acl-logging/0.log" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.154944 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovn-controller/0.log" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.155582 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.218753 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pvkbg"] Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.218992 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovn-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219008 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovn-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219017 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kube-rbac-proxy-node" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219024 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kube-rbac-proxy-node" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219034 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="sbdb" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219040 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="sbdb" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219052 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219076 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219087 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kubecfg-setup" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219093 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kubecfg-setup" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219101 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219108 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219114 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovn-acl-logging" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219121 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovn-acl-logging" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219132 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219139 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219151 4922 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="nbdb" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219158 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="nbdb" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219165 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219170 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219179 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219186 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219194 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="northd" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219200 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="northd" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219295 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219304 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="sbdb" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219311 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219319 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovn-acl-logging" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219325 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219333 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovn-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219342 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="kube-rbac-proxy-node" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219353 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219361 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219368 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="nbdb" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219376 4922 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="northd" Jan 26 14:20:34 crc kubenswrapper[4922]: E0126 14:20:34.219487 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219494 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.219575 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" containerName="ovnkube-controller" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.221148 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.332700 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-kubelet\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.332792 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-script-lib\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.332855 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-env-overrides\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.332894 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-openvswitch\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.332935 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-ovn-kubernetes\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.332972 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333042 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-slash\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333147 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333182 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333286 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-slash" (OuterVolumeSpecName: "host-slash") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333358 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-config\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333410 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-log-socket\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333443 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-var-lib-openvswitch\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333473 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333549 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-log-socket" (OuterVolumeSpecName: "log-socket") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333571 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m7cd\" (UniqueName: \"kubernetes.io/projected/ec4defeb-f2b0-4291-9147-b37e5c43da57-kube-api-access-9m7cd\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333602 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333630 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333643 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-netd\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333714 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333729 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333738 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333782 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333793 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-netns\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333833 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-etc-openvswitch\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333864 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-ovn\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333885 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333909 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-bin\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333944 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-systemd\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333986 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-node-log\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334033 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-systemd-units\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333940 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333924 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.333975 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334021 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-node-log" (OuterVolumeSpecName: "node-log") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334102 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). 
InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334117 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovn-node-metrics-cert\") pod \"ec4defeb-f2b0-4291-9147-b37e5c43da57\" (UID: \"ec4defeb-f2b0-4291-9147-b37e5c43da57\") " Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334496 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/07d34138-717f-44a7-b3f5-25e7dccb0f72-ovnkube-script-lib\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334638 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-var-lib-openvswitch\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334684 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-systemd-units\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334781 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07d34138-717f-44a7-b3f5-25e7dccb0f72-ovnkube-config\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334857 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-run-ovn\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334961 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-run-ovn-kubernetes\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.334982 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07d34138-717f-44a7-b3f5-25e7dccb0f72-env-overrides\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335011 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-run-netns\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335026 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-run-systemd\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335157 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-cni-bin\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335211 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07d34138-717f-44a7-b3f5-25e7dccb0f72-ovn-node-metrics-cert\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335250 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-run-openvswitch\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335321 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335356 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-cni-netd\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335400 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-kubelet\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335513 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-slash\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335563 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-log-socket\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335636 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767jp\" (UniqueName: \"kubernetes.io/projected/07d34138-717f-44a7-b3f5-25e7dccb0f72-kube-api-access-767jp\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335709 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-node-log\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335760 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-etc-openvswitch\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335863 4922 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335887 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335906 4922 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335925 4922 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335941 4922 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335959 4922 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335975 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.335992 4922 reconciler_common.go:293] "Volume detached for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.336008 4922 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.336024 4922 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.336044 4922 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.336151 4922 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.336175 4922 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.336198 4922 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.336542 4922 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.336559 4922 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.336576 4922 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.340874 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.341257 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec4defeb-f2b0-4291-9147-b37e5c43da57-kube-api-access-9m7cd" (OuterVolumeSpecName: "kube-api-access-9m7cd") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "kube-api-access-9m7cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.355946 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ec4defeb-f2b0-4291-9147-b37e5c43da57" (UID: "ec4defeb-f2b0-4291-9147-b37e5c43da57"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.437903 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.437993 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-cni-netd\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438004 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438053 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-kubelet\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438143 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-kubelet\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438210 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-cni-netd\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438148 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-slash\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438285 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-log-socket\") pod \"ovnkube-node-pvkbg\" (UID: 
\"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438316 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-slash\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438344 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767jp\" (UniqueName: \"kubernetes.io/projected/07d34138-717f-44a7-b3f5-25e7dccb0f72-kube-api-access-767jp\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438410 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-node-log\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438430 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-log-socket\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438474 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-etc-openvswitch\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438535 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-etc-openvswitch\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438556 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/07d34138-717f-44a7-b3f5-25e7dccb0f72-ovnkube-script-lib\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438569 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-node-log\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438608 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-var-lib-openvswitch\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: 
I0126 14:20:34.438646 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-systemd-units\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438676 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07d34138-717f-44a7-b3f5-25e7dccb0f72-ovnkube-config\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438699 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-var-lib-openvswitch\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438714 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-run-ovn\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438773 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-systemd-units\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438791 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-run-ovn-kubernetes\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438890 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07d34138-717f-44a7-b3f5-25e7dccb0f72-env-overrides\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438927 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-run-ovn\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438839 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-run-ovn-kubernetes\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438984 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-run-systemd\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.438945 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-run-systemd\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439059 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-run-netns\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439152 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-cni-bin\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439184 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-run-netns\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439283 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-host-cni-bin\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439205 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07d34138-717f-44a7-b3f5-25e7dccb0f72-ovn-node-metrics-cert\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439379 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-run-openvswitch\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439626 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m7cd\" (UniqueName: \"kubernetes.io/projected/ec4defeb-f2b0-4291-9147-b37e5c43da57-kube-api-access-9m7cd\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439651 4922 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec4defeb-f2b0-4291-9147-b37e5c43da57-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439672 4922 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec4defeb-f2b0-4291-9147-b37e5c43da57-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439729 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/07d34138-717f-44a7-b3f5-25e7dccb0f72-run-openvswitch\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.439985 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07d34138-717f-44a7-b3f5-25e7dccb0f72-env-overrides\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.440049 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07d34138-717f-44a7-b3f5-25e7dccb0f72-ovnkube-config\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.440183 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/07d34138-717f-44a7-b3f5-25e7dccb0f72-ovnkube-script-lib\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.442858 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07d34138-717f-44a7-b3f5-25e7dccb0f72-ovn-node-metrics-cert\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.470642 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767jp\" (UniqueName: \"kubernetes.io/projected/07d34138-717f-44a7-b3f5-25e7dccb0f72-kube-api-access-767jp\") pod \"ovnkube-node-pvkbg\" (UID: \"07d34138-717f-44a7-b3f5-25e7dccb0f72\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.541535 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:34 crc kubenswrapper[4922]: W0126 14:20:34.574227 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07d34138_717f_44a7_b3f5_25e7dccb0f72.slice/crio-41a762ca41a9afb05e072722b4f5b373c8a345b84859c2116cecdf026f5f01bd WatchSource:0}: Error finding container 41a762ca41a9afb05e072722b4f5b373c8a345b84859c2116cecdf026f5f01bd: Status 404 returned error can't find the container with id 41a762ca41a9afb05e072722b4f5b373c8a345b84859c2116cecdf026f5f01bd Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.722786 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/2.log" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.724383 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerStarted","Data":"41a762ca41a9afb05e072722b4f5b373c8a345b84859c2116cecdf026f5f01bd"} Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.729396 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovn-acl-logging/0.log" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.730039 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5m7p9_ec4defeb-f2b0-4291-9147-b37e5c43da57/ovn-controller/0.log" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.730512 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" event={"ID":"ec4defeb-f2b0-4291-9147-b37e5c43da57","Type":"ContainerDied","Data":"8198241e659e47b41d2d9176758d220eba65936f01649b5928d4dd521e7dae37"} Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.730559 4922 scope.go:117] "RemoveContainer" containerID="8fb06ac2e79a533c1628fc31291df9c8a1c2ac28c39bc347082a4d3fa718ba74" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.730628 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5m7p9" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.752916 4922 scope.go:117] "RemoveContainer" containerID="c4c3ece08fc2bdb6fdc149532ec3f15200b728d6019b801ee794c96938856d09" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.768681 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5m7p9"] Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.772502 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5m7p9"] Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.783507 4922 scope.go:117] "RemoveContainer" containerID="42d4e31ccbb4a067604e69daa290a91d58a3658bbaa417cbc1354c378c26d4c1" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.802307 4922 scope.go:117] "RemoveContainer" containerID="13f7db5cfc912abdfdecd22cae3110621d9027a2cbba81049dab7d804e16352e" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.820361 4922 scope.go:117] "RemoveContainer" containerID="db9a6f52964b87f22edbdda7195a1243d084616db949f577205237f43fcbf710" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.846413 4922 scope.go:117] "RemoveContainer" containerID="3585b1982a57bc92af0580f981e380fea89924f3f49c175af2dbd9c126985bb4" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.883522 4922 scope.go:117] "RemoveContainer" containerID="eebe60a2ea22ea537d3fcb8bf2731f9c7f1bdbba2dc45b2c9f1bf6aef33af16e" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.916891 4922 scope.go:117] "RemoveContainer" containerID="b5d530b6faa709e46a56b8da879d9bf846e3e4604d12288a99b88ed3c824ada8" Jan 26 14:20:34 crc kubenswrapper[4922]: I0126 14:20:34.935132 4922 scope.go:117] "RemoveContainer" containerID="0be4f1c73b0ec1ae25b249d0d43bae697189d03385c999700715c50738e82ba0" Jan 26 14:20:35 crc kubenswrapper[4922]: I0126 14:20:35.102738 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec4defeb-f2b0-4291-9147-b37e5c43da57" path="/var/lib/kubelet/pods/ec4defeb-f2b0-4291-9147-b37e5c43da57/volumes" Jan 26 14:20:35 crc kubenswrapper[4922]: I0126 14:20:35.739621 4922 generic.go:334] "Generic (PLEG): container finished" podID="07d34138-717f-44a7-b3f5-25e7dccb0f72" containerID="991a798f7206038ff4af2e0c1bd321970569fa049a26373b46c29ee0bf41d19b" exitCode=0 Jan 26 14:20:35 crc kubenswrapper[4922]: I0126 14:20:35.739733 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerDied","Data":"991a798f7206038ff4af2e0c1bd321970569fa049a26373b46c29ee0bf41d19b"} Jan 26 14:20:36 crc kubenswrapper[4922]: I0126 14:20:36.753206 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerStarted","Data":"fd63bcefdfcb64bc648d1926f07fb2e82eff7d516a0fa9cb031b6e100dd0940a"} Jan 26 14:20:36 crc kubenswrapper[4922]: I0126 14:20:36.753265 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerStarted","Data":"6d2318e4b4ee489d0d4d338d9fa2a6d55708c9f427a406355b8d96c7571cfb50"} Jan 26 14:20:37 crc kubenswrapper[4922]: I0126 14:20:37.763726 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" 
event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerStarted","Data":"4e854f9618ebda0013b2bed187ef78011337847cede0c2300a48f5e796fb6621"} Jan 26 14:20:37 crc kubenswrapper[4922]: I0126 14:20:37.764338 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerStarted","Data":"57cafca8aa959eb66b4b539dbbba7245369c18b1d2024852c15c038c217098d0"} Jan 26 14:20:37 crc kubenswrapper[4922]: I0126 14:20:37.764355 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerStarted","Data":"be2876da48641564eb36aa4bd7ef3cf9412838a52c8931e4fc7b3bba8be9dd97"} Jan 26 14:20:37 crc kubenswrapper[4922]: I0126 14:20:37.764372 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerStarted","Data":"2531564752f0e1078ad92b152aa7b469792c22912852ec0d45d6850da0a14f69"} Jan 26 14:20:38 crc kubenswrapper[4922]: I0126 14:20:38.246454 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-hdzlp" Jan 26 14:20:40 crc kubenswrapper[4922]: I0126 14:20:40.807466 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerStarted","Data":"d1efad551cc49b5af3f18363ac9022aaa2862338ebb7a741931f0a99d4e35a72"} Jan 26 14:20:42 crc kubenswrapper[4922]: I0126 14:20:42.830668 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" event={"ID":"07d34138-717f-44a7-b3f5-25e7dccb0f72","Type":"ContainerStarted","Data":"086c47ef0a7539a76e37448e291894582e8a9a74b764f174b382b1f9f24dd28c"} Jan 26 14:20:42 crc kubenswrapper[4922]: I0126 14:20:42.831220 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:42 crc kubenswrapper[4922]: I0126 14:20:42.831381 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:42 crc kubenswrapper[4922]: I0126 14:20:42.831487 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:42 crc kubenswrapper[4922]: I0126 14:20:42.873933 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:42 crc kubenswrapper[4922]: I0126 14:20:42.874183 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" podStartSLOduration=8.874051659 podStartE2EDuration="8.874051659s" podCreationTimestamp="2026-01-26 14:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:20:42.869844943 +0000 UTC m=+660.072107715" watchObservedRunningTime="2026-01-26 14:20:42.874051659 +0000 UTC m=+660.076314441" Jan 26 14:20:42 crc kubenswrapper[4922]: I0126 14:20:42.884756 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:20:48 crc kubenswrapper[4922]: I0126 14:20:48.092325 4922 scope.go:117] "RemoveContainer" 
containerID="8af1882e572fae107a17e68afc2597eb00a381ab59c787a07cad8c9e8356abeb" Jan 26 14:20:48 crc kubenswrapper[4922]: E0126 14:20:48.093906 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9zx7f_openshift-multus(103e8f62-57c7-4d49-b740-16d357710e61)\"" pod="openshift-multus/multus-9zx7f" podUID="103e8f62-57c7-4d49-b740-16d357710e61" Jan 26 14:20:59 crc kubenswrapper[4922]: I0126 14:20:59.092807 4922 scope.go:117] "RemoveContainer" containerID="8af1882e572fae107a17e68afc2597eb00a381ab59c787a07cad8c9e8356abeb" Jan 26 14:20:59 crc kubenswrapper[4922]: I0126 14:20:59.959363 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zx7f_103e8f62-57c7-4d49-b740-16d357710e61/kube-multus/2.log" Jan 26 14:20:59 crc kubenswrapper[4922]: I0126 14:20:59.959786 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zx7f" event={"ID":"103e8f62-57c7-4d49-b740-16d357710e61","Type":"ContainerStarted","Data":"ab6a5f12a3e5e0651c155308a1cbfca03bfcb67448c6c3dd33a415ddd05a6451"} Jan 26 14:21:04 crc kubenswrapper[4922]: I0126 14:21:04.586665 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pvkbg" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.828241 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7"] Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.830521 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.833386 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.851731 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.852370 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8swlf\" (UniqueName: \"kubernetes.io/projected/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-kube-api-access-8swlf\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.852705 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.854102 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7"] Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.954652 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.955294 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.955497 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.955840 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.955932 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8swlf\" (UniqueName: \"kubernetes.io/projected/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-kube-api-access-8swlf\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:13 crc kubenswrapper[4922]: I0126 14:21:13.977787 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8swlf\" (UniqueName: \"kubernetes.io/projected/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-kube-api-access-8swlf\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:14 crc kubenswrapper[4922]: I0126 14:21:14.161789 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:14 crc kubenswrapper[4922]: I0126 14:21:14.428671 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7"] Jan 26 14:21:15 crc kubenswrapper[4922]: I0126 14:21:15.068723 4922 generic.go:334] "Generic (PLEG): container finished" podID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerID="5958ada92dce4a043b29a80b5c386a5857eee941a21476c3129b29a15245f2e7" exitCode=0 Jan 26 14:21:15 crc kubenswrapper[4922]: I0126 14:21:15.068833 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" event={"ID":"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66","Type":"ContainerDied","Data":"5958ada92dce4a043b29a80b5c386a5857eee941a21476c3129b29a15245f2e7"} Jan 26 14:21:15 crc kubenswrapper[4922]: I0126 14:21:15.068986 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" event={"ID":"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66","Type":"ContainerStarted","Data":"a90a3b3b76bf8bcc519321c6b97876bcf45abcaa4e4da3bd2d6de93d7839e021"} Jan 26 14:21:17 crc kubenswrapper[4922]: I0126 14:21:17.083582 4922 generic.go:334] "Generic (PLEG): container finished" podID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerID="f47d07a6d0df301bcd6b64e2b0fac5cd4a1c61c5951ac200e8eb6750f2c44b71" exitCode=0 Jan 26 14:21:17 crc kubenswrapper[4922]: I0126 14:21:17.083687 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" event={"ID":"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66","Type":"ContainerDied","Data":"f47d07a6d0df301bcd6b64e2b0fac5cd4a1c61c5951ac200e8eb6750f2c44b71"} Jan 26 14:21:18 crc kubenswrapper[4922]: I0126 14:21:18.099552 4922 generic.go:334] "Generic (PLEG): container finished" podID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerID="6d4a1bf6a39a419668faee8ba313f8428096f996e565fb37525e99cfcc85cf4e" exitCode=0 Jan 26 14:21:18 crc kubenswrapper[4922]: I0126 14:21:18.099655 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" event={"ID":"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66","Type":"ContainerDied","Data":"6d4a1bf6a39a419668faee8ba313f8428096f996e565fb37525e99cfcc85cf4e"} Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.349263 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.453636 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-util\") pod \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.453992 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8swlf\" (UniqueName: \"kubernetes.io/projected/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-kube-api-access-8swlf\") pod \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.454049 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-bundle\") pod \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\" (UID: \"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66\") " Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.460515 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-kube-api-access-8swlf" (OuterVolumeSpecName: "kube-api-access-8swlf") pod "c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" (UID: "c45425cd-fcc2-44ca-9f6f-6e1c9296ef66"). InnerVolumeSpecName "kube-api-access-8swlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.463199 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-bundle" (OuterVolumeSpecName: "bundle") pod "c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" (UID: "c45425cd-fcc2-44ca-9f6f-6e1c9296ef66"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.512550 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-util" (OuterVolumeSpecName: "util") pod "c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" (UID: "c45425cd-fcc2-44ca-9f6f-6e1c9296ef66"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.556018 4922 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-util\") on node \"crc\" DevicePath \"\"" Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.556093 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8swlf\" (UniqueName: \"kubernetes.io/projected/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-kube-api-access-8swlf\") on node \"crc\" DevicePath \"\"" Jan 26 14:21:19 crc kubenswrapper[4922]: I0126 14:21:19.556116 4922 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c45425cd-fcc2-44ca-9f6f-6e1c9296ef66-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:21:20 crc kubenswrapper[4922]: I0126 14:21:20.131041 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" event={"ID":"c45425cd-fcc2-44ca-9f6f-6e1c9296ef66","Type":"ContainerDied","Data":"a90a3b3b76bf8bcc519321c6b97876bcf45abcaa4e4da3bd2d6de93d7839e021"} Jan 26 14:21:20 crc kubenswrapper[4922]: I0126 14:21:20.131192 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7" Jan 26 14:21:20 crc kubenswrapper[4922]: I0126 14:21:20.131211 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a90a3b3b76bf8bcc519321c6b97876bcf45abcaa4e4da3bd2d6de93d7839e021" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.946219 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b"] Jan 26 14:21:27 crc kubenswrapper[4922]: E0126 14:21:27.946864 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerName="pull" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.946876 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerName="pull" Jan 26 14:21:27 crc kubenswrapper[4922]: E0126 14:21:27.946890 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerName="util" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.946896 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerName="util" Jan 26 14:21:27 crc kubenswrapper[4922]: E0126 14:21:27.946906 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerName="extract" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.946913 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerName="extract" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.947034 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45425cd-fcc2-44ca-9f6f-6e1c9296ef66" containerName="extract" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.947454 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.949773 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.949773 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.949792 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4vgfd" Jan 26 14:21:27 crc kubenswrapper[4922]: I0126 14:21:27.961044 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.061630 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.062508 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.064562 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xm8n\" (UniqueName: \"kubernetes.io/projected/3249a43d-d843-43c3-b922-be437eabb548-kube-api-access-6xm8n\") pod \"obo-prometheus-operator-68bc856cb9-xgf5b\" (UID: \"3249a43d-d843-43c3-b922-be437eabb548\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.065180 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.066300 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.066997 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.068519 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-sm6jg" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.082145 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.090030 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.165535 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xm8n\" (UniqueName: \"kubernetes.io/projected/3249a43d-d843-43c3-b922-be437eabb548-kube-api-access-6xm8n\") pod \"obo-prometheus-operator-68bc856cb9-xgf5b\" (UID: \"3249a43d-d843-43c3-b922-be437eabb548\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.165597 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e30b09af-aae4-4f17-ab60-25f6f3dca352-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv\" (UID: \"e30b09af-aae4-4f17-ab60-25f6f3dca352\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.165624 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7289018a-d6ca-4075-b586-e180be982247-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9\" (UID: \"7289018a-d6ca-4075-b586-e180be982247\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.165696 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e30b09af-aae4-4f17-ab60-25f6f3dca352-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv\" (UID: \"e30b09af-aae4-4f17-ab60-25f6f3dca352\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.165750 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7289018a-d6ca-4075-b586-e180be982247-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9\" (UID: \"7289018a-d6ca-4075-b586-e180be982247\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.206609 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xm8n\" (UniqueName: \"kubernetes.io/projected/3249a43d-d843-43c3-b922-be437eabb548-kube-api-access-6xm8n\") pod \"obo-prometheus-operator-68bc856cb9-xgf5b\" (UID: \"3249a43d-d843-43c3-b922-be437eabb548\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.221993 
4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mct2h"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.227794 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.243023 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-kxcss" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.243307 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.256985 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mct2h"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.266942 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.267750 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e30b09af-aae4-4f17-ab60-25f6f3dca352-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv\" (UID: \"e30b09af-aae4-4f17-ab60-25f6f3dca352\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.267786 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7289018a-d6ca-4075-b586-e180be982247-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9\" (UID: \"7289018a-d6ca-4075-b586-e180be982247\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.267814 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/157e0710-b880-4501-99ad-864b2f70cef5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mct2h\" (UID: \"157e0710-b880-4501-99ad-864b2f70cef5\") " pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.267841 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gll5\" (UniqueName: \"kubernetes.io/projected/157e0710-b880-4501-99ad-864b2f70cef5-kube-api-access-5gll5\") pod \"observability-operator-59bdc8b94-mct2h\" (UID: \"157e0710-b880-4501-99ad-864b2f70cef5\") " pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.267872 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e30b09af-aae4-4f17-ab60-25f6f3dca352-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv\" (UID: \"e30b09af-aae4-4f17-ab60-25f6f3dca352\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.267888 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/7289018a-d6ca-4075-b586-e180be982247-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9\" (UID: \"7289018a-d6ca-4075-b586-e180be982247\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.273432 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7289018a-d6ca-4075-b586-e180be982247-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9\" (UID: \"7289018a-d6ca-4075-b586-e180be982247\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.279504 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e30b09af-aae4-4f17-ab60-25f6f3dca352-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv\" (UID: \"e30b09af-aae4-4f17-ab60-25f6f3dca352\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.285044 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7289018a-d6ca-4075-b586-e180be982247-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9\" (UID: \"7289018a-d6ca-4075-b586-e180be982247\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.290372 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e30b09af-aae4-4f17-ab60-25f6f3dca352-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv\" (UID: \"e30b09af-aae4-4f17-ab60-25f6f3dca352\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.369005 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/157e0710-b880-4501-99ad-864b2f70cef5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mct2h\" (UID: \"157e0710-b880-4501-99ad-864b2f70cef5\") " pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.369090 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gll5\" (UniqueName: \"kubernetes.io/projected/157e0710-b880-4501-99ad-864b2f70cef5-kube-api-access-5gll5\") pod \"observability-operator-59bdc8b94-mct2h\" (UID: \"157e0710-b880-4501-99ad-864b2f70cef5\") " pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.369620 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7gzrt"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.370593 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.374751 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/157e0710-b880-4501-99ad-864b2f70cef5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mct2h\" (UID: \"157e0710-b880-4501-99ad-864b2f70cef5\") " pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.383764 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.385422 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-svxn8" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.389337 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gll5\" (UniqueName: \"kubernetes.io/projected/157e0710-b880-4501-99ad-864b2f70cef5-kube-api-access-5gll5\") pod \"observability-operator-59bdc8b94-mct2h\" (UID: \"157e0710-b880-4501-99ad-864b2f70cef5\") " pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.389542 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.392434 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7gzrt"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.471409 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqjw\" (UniqueName: \"kubernetes.io/projected/5b04fb53-39bc-4552-b7af-39e57a4102df-kube-api-access-vhqjw\") pod \"perses-operator-5bf474d74f-7gzrt\" (UID: \"5b04fb53-39bc-4552-b7af-39e57a4102df\") " pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.471476 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b04fb53-39bc-4552-b7af-39e57a4102df-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7gzrt\" (UID: \"5b04fb53-39bc-4552-b7af-39e57a4102df\") " pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.512806 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.567406 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.572313 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b04fb53-39bc-4552-b7af-39e57a4102df-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7gzrt\" (UID: \"5b04fb53-39bc-4552-b7af-39e57a4102df\") " pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.572625 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqjw\" (UniqueName: \"kubernetes.io/projected/5b04fb53-39bc-4552-b7af-39e57a4102df-kube-api-access-vhqjw\") pod \"perses-operator-5bf474d74f-7gzrt\" (UID: \"5b04fb53-39bc-4552-b7af-39e57a4102df\") " pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.573496 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b04fb53-39bc-4552-b7af-39e57a4102df-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7gzrt\" (UID: \"5b04fb53-39bc-4552-b7af-39e57a4102df\") " pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.592918 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqjw\" (UniqueName: \"kubernetes.io/projected/5b04fb53-39bc-4552-b7af-39e57a4102df-kube-api-access-vhqjw\") pod \"perses-operator-5bf474d74f-7gzrt\" (UID: \"5b04fb53-39bc-4552-b7af-39e57a4102df\") " pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.684730 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv"] Jan 26 14:21:28 crc kubenswrapper[4922]: W0126 14:21:28.704257 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode30b09af_aae4_4f17_ab60_25f6f3dca352.slice/crio-af18778fb5ca1ae0d86d86dd0f017894f3a7b9188f7fe53cdf499dbbe6b59ecf WatchSource:0}: Error finding container af18778fb5ca1ae0d86d86dd0f017894f3a7b9188f7fe53cdf499dbbe6b59ecf: Status 404 returned error can't find the container with id af18778fb5ca1ae0d86d86dd0f017894f3a7b9188f7fe53cdf499dbbe6b59ecf Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.713333 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9"] Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.716837 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:21:28 crc kubenswrapper[4922]: W0126 14:21:28.736800 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7289018a_d6ca_4075_b586_e180be982247.slice/crio-2239c62bb9babff567aa393f3228feb9d2646680dea138ed27992fd268fdeb8e WatchSource:0}: Error finding container 2239c62bb9babff567aa393f3228feb9d2646680dea138ed27992fd268fdeb8e: Status 404 returned error can't find the container with id 2239c62bb9babff567aa393f3228feb9d2646680dea138ed27992fd268fdeb8e Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.839694 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mct2h"] Jan 26 14:21:28 crc kubenswrapper[4922]: W0126 14:21:28.847046 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod157e0710_b880_4501_99ad_864b2f70cef5.slice/crio-fd4e9b7660cc5c46a4e8fb551c25b0b2a230d1d16c1ae984e0f0690759815b54 WatchSource:0}: Error finding container fd4e9b7660cc5c46a4e8fb551c25b0b2a230d1d16c1ae984e0f0690759815b54: Status 404 returned error can't find the container with id fd4e9b7660cc5c46a4e8fb551c25b0b2a230d1d16c1ae984e0f0690759815b54 Jan 26 14:21:28 crc kubenswrapper[4922]: I0126 14:21:28.961220 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7gzrt"] Jan 26 14:21:28 crc kubenswrapper[4922]: W0126 14:21:28.982989 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b04fb53_39bc_4552_b7af_39e57a4102df.slice/crio-4e91a675cd6e92a4502fbc97cafb31879dd007586b435bbd0fa043af3c1e45a2 WatchSource:0}: Error finding container 4e91a675cd6e92a4502fbc97cafb31879dd007586b435bbd0fa043af3c1e45a2: Status 404 returned error can't find the container with id 4e91a675cd6e92a4502fbc97cafb31879dd007586b435bbd0fa043af3c1e45a2 Jan 26 14:21:29 crc kubenswrapper[4922]: I0126 14:21:29.175759 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" event={"ID":"5b04fb53-39bc-4552-b7af-39e57a4102df","Type":"ContainerStarted","Data":"4e91a675cd6e92a4502fbc97cafb31879dd007586b435bbd0fa043af3c1e45a2"} Jan 26 14:21:29 crc kubenswrapper[4922]: I0126 14:21:29.176546 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" event={"ID":"e30b09af-aae4-4f17-ab60-25f6f3dca352","Type":"ContainerStarted","Data":"af18778fb5ca1ae0d86d86dd0f017894f3a7b9188f7fe53cdf499dbbe6b59ecf"} Jan 26 14:21:29 crc kubenswrapper[4922]: I0126 14:21:29.177471 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b" event={"ID":"3249a43d-d843-43c3-b922-be437eabb548","Type":"ContainerStarted","Data":"e4caa3ffdb2b4ebfb10fed2ec87b6c6d2ec2b60747ed4d7b03b835e04d464746"} Jan 26 14:21:29 crc kubenswrapper[4922]: I0126 14:21:29.178301 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mct2h" event={"ID":"157e0710-b880-4501-99ad-864b2f70cef5","Type":"ContainerStarted","Data":"fd4e9b7660cc5c46a4e8fb551c25b0b2a230d1d16c1ae984e0f0690759815b54"} Jan 26 14:21:29 crc kubenswrapper[4922]: I0126 14:21:29.179042 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" event={"ID":"7289018a-d6ca-4075-b586-e180be982247","Type":"ContainerStarted","Data":"2239c62bb9babff567aa393f3228feb9d2646680dea138ed27992fd268fdeb8e"} Jan 26 14:21:35 crc kubenswrapper[4922]: I0126 14:21:35.214027 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" event={"ID":"5b04fb53-39bc-4552-b7af-39e57a4102df","Type":"ContainerStarted","Data":"7421cba9ede3bf74e5462f898e7148ed80d241803162c3d9082cf01fa1a947b0"} Jan 26 14:21:35 crc kubenswrapper[4922]: I0126 14:21:35.214733 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:21:35 crc kubenswrapper[4922]: I0126 14:21:35.215993 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" event={"ID":"e30b09af-aae4-4f17-ab60-25f6f3dca352","Type":"ContainerStarted","Data":"d53eaea560532bc0555bd582d227da1279f50952873577739c04c8dffb606905"} Jan 26 14:21:35 crc kubenswrapper[4922]: I0126 14:21:35.217473 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" event={"ID":"7289018a-d6ca-4075-b586-e180be982247","Type":"ContainerStarted","Data":"8d269e1a5aca78e860b290d1711ad522377597418cdfc9696643a119abf85547"} Jan 26 14:21:35 crc kubenswrapper[4922]: I0126 14:21:35.235943 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" podStartSLOduration=1.6478176439999999 podStartE2EDuration="7.235917768s" podCreationTimestamp="2026-01-26 14:21:28 +0000 UTC" firstStartedPulling="2026-01-26 14:21:28.985797386 +0000 UTC m=+706.188060148" lastFinishedPulling="2026-01-26 14:21:34.5738975 +0000 UTC m=+711.776160272" observedRunningTime="2026-01-26 14:21:35.231168958 +0000 UTC m=+712.433431750" watchObservedRunningTime="2026-01-26 14:21:35.235917768 +0000 UTC m=+712.438180570" Jan 26 14:21:35 crc kubenswrapper[4922]: I0126 14:21:35.261879 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9" podStartSLOduration=1.4341914980000001 podStartE2EDuration="7.26185566s" podCreationTimestamp="2026-01-26 14:21:28 +0000 UTC" firstStartedPulling="2026-01-26 14:21:28.739960707 +0000 UTC m=+705.942223479" lastFinishedPulling="2026-01-26 14:21:34.567624869 +0000 UTC m=+711.769887641" observedRunningTime="2026-01-26 14:21:35.259389522 +0000 UTC m=+712.461652314" watchObservedRunningTime="2026-01-26 14:21:35.26185566 +0000 UTC m=+712.464118442" Jan 26 14:21:40 crc kubenswrapper[4922]: I0126 14:21:40.256600 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b" event={"ID":"3249a43d-d843-43c3-b922-be437eabb548","Type":"ContainerStarted","Data":"1499e1a1420dc553492d35fc9a96e48cbde9b122fd5fd1a91ef8875d467fa379"} Jan 26 14:21:40 crc kubenswrapper[4922]: I0126 14:21:40.277938 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv" podStartSLOduration=6.403080781 podStartE2EDuration="12.277911796s" podCreationTimestamp="2026-01-26 14:21:28 +0000 UTC" firstStartedPulling="2026-01-26 14:21:28.711933489 +0000 UTC m=+705.914196251" 
lastFinishedPulling="2026-01-26 14:21:34.586764504 +0000 UTC m=+711.789027266" observedRunningTime="2026-01-26 14:21:35.289730484 +0000 UTC m=+712.491993306" watchObservedRunningTime="2026-01-26 14:21:40.277911796 +0000 UTC m=+717.480174578" Jan 26 14:21:40 crc kubenswrapper[4922]: I0126 14:21:40.278624 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xgf5b" podStartSLOduration=2.070826115 podStartE2EDuration="13.278618445s" podCreationTimestamp="2026-01-26 14:21:27 +0000 UTC" firstStartedPulling="2026-01-26 14:21:28.537900208 +0000 UTC m=+705.740162980" lastFinishedPulling="2026-01-26 14:21:39.745692538 +0000 UTC m=+716.947955310" observedRunningTime="2026-01-26 14:21:40.271816474 +0000 UTC m=+717.474079246" watchObservedRunningTime="2026-01-26 14:21:40.278618445 +0000 UTC m=+717.480881217" Jan 26 14:21:41 crc kubenswrapper[4922]: I0126 14:21:41.315384 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:21:41 crc kubenswrapper[4922]: I0126 14:21:41.315465 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:21:43 crc kubenswrapper[4922]: I0126 14:21:43.277558 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mct2h" event={"ID":"157e0710-b880-4501-99ad-864b2f70cef5","Type":"ContainerStarted","Data":"3d211bc57462346551d0d3f974f0f91b2efcd30b92a425f49362b8daf94556b3"} Jan 26 14:21:43 crc kubenswrapper[4922]: I0126 14:21:43.278822 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:43 crc kubenswrapper[4922]: I0126 14:21:43.279795 4922 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mct2h container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": dial tcp 10.217.0.37:8081: connect: connection refused" start-of-body= Jan 26 14:21:43 crc kubenswrapper[4922]: I0126 14:21:43.279855 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mct2h" podUID="157e0710-b880-4501-99ad-864b2f70cef5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": dial tcp 10.217.0.37:8081: connect: connection refused" Jan 26 14:21:43 crc kubenswrapper[4922]: I0126 14:21:43.310751 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-mct2h" podStartSLOduration=1.104317921 podStartE2EDuration="15.310726182s" podCreationTimestamp="2026-01-26 14:21:28 +0000 UTC" firstStartedPulling="2026-01-26 14:21:28.850765945 +0000 UTC m=+706.053028717" lastFinishedPulling="2026-01-26 14:21:43.057174186 +0000 UTC m=+720.259436978" observedRunningTime="2026-01-26 14:21:43.306609176 +0000 UTC m=+720.508871948" watchObservedRunningTime="2026-01-26 14:21:43.310726182 +0000 UTC m=+720.512988964" Jan 26 14:21:44 
crc kubenswrapper[4922]: I0126 14:21:44.285584 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-mct2h" Jan 26 14:21:48 crc kubenswrapper[4922]: I0126 14:21:48.722407 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-7gzrt" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.325710 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx"] Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.328086 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.330398 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.342007 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx"] Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.412771 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.412852 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.413048 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k982l\" (UniqueName: \"kubernetes.io/projected/0965b8eb-299c-4245-a5e2-a695e6011131-kube-api-access-k982l\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.514445 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k982l\" (UniqueName: \"kubernetes.io/projected/0965b8eb-299c-4245-a5e2-a695e6011131-kube-api-access-k982l\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.514624 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.514656 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.515355 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.515649 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.543450 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k982l\" (UniqueName: \"kubernetes.io/projected/0965b8eb-299c-4245-a5e2-a695e6011131-kube-api-access-k982l\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.652040 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:07 crc kubenswrapper[4922]: I0126 14:22:07.936354 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx"] Jan 26 14:22:08 crc kubenswrapper[4922]: I0126 14:22:08.456989 4922 generic.go:334] "Generic (PLEG): container finished" podID="0965b8eb-299c-4245-a5e2-a695e6011131" containerID="a990be1b67440ca020b040da4a9bba3fc1e08a56e5064dff99b041346f5e0e05" exitCode=0 Jan 26 14:22:08 crc kubenswrapper[4922]: I0126 14:22:08.457126 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" event={"ID":"0965b8eb-299c-4245-a5e2-a695e6011131","Type":"ContainerDied","Data":"a990be1b67440ca020b040da4a9bba3fc1e08a56e5064dff99b041346f5e0e05"} Jan 26 14:22:08 crc kubenswrapper[4922]: I0126 14:22:08.457434 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" event={"ID":"0965b8eb-299c-4245-a5e2-a695e6011131","Type":"ContainerStarted","Data":"65372b0e3e9deb6a5d0d929a5d44e8a03e7a2b45cbd15804c2f32639b8baf0a5"} Jan 26 14:22:10 crc kubenswrapper[4922]: I0126 14:22:10.473049 4922 generic.go:334] "Generic (PLEG): container finished" podID="0965b8eb-299c-4245-a5e2-a695e6011131" containerID="e0629d8e1c7fe0eddb7148513cbed9ae7c74d1158d60729533c284e3a18df443" exitCode=0 Jan 26 14:22:10 crc kubenswrapper[4922]: I0126 14:22:10.473153 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" event={"ID":"0965b8eb-299c-4245-a5e2-a695e6011131","Type":"ContainerDied","Data":"e0629d8e1c7fe0eddb7148513cbed9ae7c74d1158d60729533c284e3a18df443"} Jan 26 14:22:11 crc kubenswrapper[4922]: I0126 14:22:11.307134 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:22:11 crc kubenswrapper[4922]: I0126 14:22:11.307228 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:22:11 crc kubenswrapper[4922]: I0126 14:22:11.483486 4922 generic.go:334] "Generic (PLEG): container finished" podID="0965b8eb-299c-4245-a5e2-a695e6011131" containerID="88813a7f2826926439075b668be0632ddbe2947d8cc791daac04bfbb4eb5f171" exitCode=0 Jan 26 14:22:11 crc kubenswrapper[4922]: I0126 14:22:11.483549 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" event={"ID":"0965b8eb-299c-4245-a5e2-a695e6011131","Type":"ContainerDied","Data":"88813a7f2826926439075b668be0632ddbe2947d8cc791daac04bfbb4eb5f171"} Jan 26 14:22:12 crc kubenswrapper[4922]: I0126 14:22:12.849818 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.001185 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k982l\" (UniqueName: \"kubernetes.io/projected/0965b8eb-299c-4245-a5e2-a695e6011131-kube-api-access-k982l\") pod \"0965b8eb-299c-4245-a5e2-a695e6011131\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.001493 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-util\") pod \"0965b8eb-299c-4245-a5e2-a695e6011131\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.001644 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-bundle\") pod \"0965b8eb-299c-4245-a5e2-a695e6011131\" (UID: \"0965b8eb-299c-4245-a5e2-a695e6011131\") " Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.002151 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-bundle" (OuterVolumeSpecName: "bundle") pod "0965b8eb-299c-4245-a5e2-a695e6011131" (UID: "0965b8eb-299c-4245-a5e2-a695e6011131"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.007477 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0965b8eb-299c-4245-a5e2-a695e6011131-kube-api-access-k982l" (OuterVolumeSpecName: "kube-api-access-k982l") pod "0965b8eb-299c-4245-a5e2-a695e6011131" (UID: "0965b8eb-299c-4245-a5e2-a695e6011131"). InnerVolumeSpecName "kube-api-access-k982l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.030583 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-util" (OuterVolumeSpecName: "util") pod "0965b8eb-299c-4245-a5e2-a695e6011131" (UID: "0965b8eb-299c-4245-a5e2-a695e6011131"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.081956 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8vm22"] Jan 26 14:22:13 crc kubenswrapper[4922]: E0126 14:22:13.082393 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0965b8eb-299c-4245-a5e2-a695e6011131" containerName="pull" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.082422 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0965b8eb-299c-4245-a5e2-a695e6011131" containerName="pull" Jan 26 14:22:13 crc kubenswrapper[4922]: E0126 14:22:13.082457 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0965b8eb-299c-4245-a5e2-a695e6011131" containerName="util" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.082470 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0965b8eb-299c-4245-a5e2-a695e6011131" containerName="util" Jan 26 14:22:13 crc kubenswrapper[4922]: E0126 14:22:13.082497 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0965b8eb-299c-4245-a5e2-a695e6011131" containerName="extract" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.082512 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0965b8eb-299c-4245-a5e2-a695e6011131" containerName="extract" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.082704 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0965b8eb-299c-4245-a5e2-a695e6011131" containerName="extract" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.084213 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.087712 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vm22"] Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.103927 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmhd\" (UniqueName: \"kubernetes.io/projected/c3e2b925-54bc-4f52-b9dc-68a418c286d6-kube-api-access-wzmhd\") pod \"redhat-operators-8vm22\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.104012 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-utilities\") pod \"redhat-operators-8vm22\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.104147 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-catalog-content\") pod \"redhat-operators-8vm22\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.104306 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k982l\" (UniqueName: \"kubernetes.io/projected/0965b8eb-299c-4245-a5e2-a695e6011131-kube-api-access-k982l\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.104335 4922 reconciler_common.go:293] "Volume detached for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-util\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.104358 4922 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0965b8eb-299c-4245-a5e2-a695e6011131-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.204883 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmhd\" (UniqueName: \"kubernetes.io/projected/c3e2b925-54bc-4f52-b9dc-68a418c286d6-kube-api-access-wzmhd\") pod \"redhat-operators-8vm22\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.204932 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-utilities\") pod \"redhat-operators-8vm22\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.204988 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-catalog-content\") pod \"redhat-operators-8vm22\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.205464 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-catalog-content\") pod \"redhat-operators-8vm22\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.205620 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-utilities\") pod \"redhat-operators-8vm22\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.234870 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmhd\" (UniqueName: \"kubernetes.io/projected/c3e2b925-54bc-4f52-b9dc-68a418c286d6-kube-api-access-wzmhd\") pod \"redhat-operators-8vm22\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.439319 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.504663 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" event={"ID":"0965b8eb-299c-4245-a5e2-a695e6011131","Type":"ContainerDied","Data":"65372b0e3e9deb6a5d0d929a5d44e8a03e7a2b45cbd15804c2f32639b8baf0a5"} Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.504739 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65372b0e3e9deb6a5d0d929a5d44e8a03e7a2b45cbd15804c2f32639b8baf0a5" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.504759 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx" Jan 26 14:22:13 crc kubenswrapper[4922]: I0126 14:22:13.873257 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vm22"] Jan 26 14:22:13 crc kubenswrapper[4922]: W0126 14:22:13.903524 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3e2b925_54bc_4f52_b9dc_68a418c286d6.slice/crio-7426cf86f20c2f6602f6e2c92a1ea8eec0760932c849a1e80bcf050fe7bf4ffc WatchSource:0}: Error finding container 7426cf86f20c2f6602f6e2c92a1ea8eec0760932c849a1e80bcf050fe7bf4ffc: Status 404 returned error can't find the container with id 7426cf86f20c2f6602f6e2c92a1ea8eec0760932c849a1e80bcf050fe7bf4ffc Jan 26 14:22:14 crc kubenswrapper[4922]: I0126 14:22:14.136524 4922 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 14:22:14 crc kubenswrapper[4922]: I0126 14:22:14.512903 4922 generic.go:334] "Generic (PLEG): container finished" podID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerID="286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074" exitCode=0 Jan 26 14:22:14 crc kubenswrapper[4922]: I0126 14:22:14.512971 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vm22" event={"ID":"c3e2b925-54bc-4f52-b9dc-68a418c286d6","Type":"ContainerDied","Data":"286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074"} Jan 26 14:22:14 crc kubenswrapper[4922]: I0126 14:22:14.513004 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vm22" event={"ID":"c3e2b925-54bc-4f52-b9dc-68a418c286d6","Type":"ContainerStarted","Data":"7426cf86f20c2f6602f6e2c92a1ea8eec0760932c849a1e80bcf050fe7bf4ffc"} Jan 26 14:22:14 crc kubenswrapper[4922]: I0126 14:22:14.960197 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7mfk9"] Jan 26 14:22:14 crc kubenswrapper[4922]: I0126 14:22:14.961183 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-7mfk9" Jan 26 14:22:14 crc kubenswrapper[4922]: I0126 14:22:14.964089 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 14:22:14 crc kubenswrapper[4922]: I0126 14:22:14.964149 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9njdt" Jan 26 14:22:14 crc kubenswrapper[4922]: I0126 14:22:14.964548 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 14:22:15 crc kubenswrapper[4922]: I0126 14:22:15.014968 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7mfk9"] Jan 26 14:22:15 crc kubenswrapper[4922]: I0126 14:22:15.130338 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scgwn\" (UniqueName: \"kubernetes.io/projected/95cb5278-d3ed-40e1-8d00-6dd6acbedd3d-kube-api-access-scgwn\") pod \"nmstate-operator-646758c888-7mfk9\" (UID: \"95cb5278-d3ed-40e1-8d00-6dd6acbedd3d\") " pod="openshift-nmstate/nmstate-operator-646758c888-7mfk9" Jan 26 14:22:15 crc kubenswrapper[4922]: I0126 14:22:15.231631 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scgwn\" (UniqueName: \"kubernetes.io/projected/95cb5278-d3ed-40e1-8d00-6dd6acbedd3d-kube-api-access-scgwn\") pod \"nmstate-operator-646758c888-7mfk9\" (UID: \"95cb5278-d3ed-40e1-8d00-6dd6acbedd3d\") " pod="openshift-nmstate/nmstate-operator-646758c888-7mfk9" Jan 26 14:22:15 crc kubenswrapper[4922]: I0126 14:22:15.263771 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scgwn\" (UniqueName: \"kubernetes.io/projected/95cb5278-d3ed-40e1-8d00-6dd6acbedd3d-kube-api-access-scgwn\") pod \"nmstate-operator-646758c888-7mfk9\" (UID: \"95cb5278-d3ed-40e1-8d00-6dd6acbedd3d\") " pod="openshift-nmstate/nmstate-operator-646758c888-7mfk9" Jan 26 14:22:15 crc kubenswrapper[4922]: I0126 14:22:15.276137 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-7mfk9" Jan 26 14:22:15 crc kubenswrapper[4922]: I0126 14:22:15.628534 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7mfk9"] Jan 26 14:22:16 crc kubenswrapper[4922]: I0126 14:22:16.532187 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-7mfk9" event={"ID":"95cb5278-d3ed-40e1-8d00-6dd6acbedd3d","Type":"ContainerStarted","Data":"8b40b57dfea3dfcc24975029a2b6d63045995f7a10bfb9fefc200f5b78760b0e"} Jan 26 14:22:16 crc kubenswrapper[4922]: I0126 14:22:16.534353 4922 generic.go:334] "Generic (PLEG): container finished" podID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerID="99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b" exitCode=0 Jan 26 14:22:16 crc kubenswrapper[4922]: I0126 14:22:16.534408 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vm22" event={"ID":"c3e2b925-54bc-4f52-b9dc-68a418c286d6","Type":"ContainerDied","Data":"99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b"} Jan 26 14:22:17 crc kubenswrapper[4922]: I0126 14:22:17.543090 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vm22" event={"ID":"c3e2b925-54bc-4f52-b9dc-68a418c286d6","Type":"ContainerStarted","Data":"83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a"} Jan 26 14:22:18 crc kubenswrapper[4922]: I0126 14:22:18.554150 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-7mfk9" event={"ID":"95cb5278-d3ed-40e1-8d00-6dd6acbedd3d","Type":"ContainerStarted","Data":"2464d712292bdb56e8422dfb2be0207821c96e82def90f7bde3c6ec9f9a27007"} Jan 26 14:22:18 crc kubenswrapper[4922]: I0126 14:22:18.576228 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8vm22" podStartSLOduration=3.185926863 podStartE2EDuration="5.57620303s" podCreationTimestamp="2026-01-26 14:22:13 +0000 UTC" firstStartedPulling="2026-01-26 14:22:14.514716185 +0000 UTC m=+751.716978957" lastFinishedPulling="2026-01-26 14:22:16.904992352 +0000 UTC m=+754.107255124" observedRunningTime="2026-01-26 14:22:17.571542073 +0000 UTC m=+754.773804845" watchObservedRunningTime="2026-01-26 14:22:18.57620303 +0000 UTC m=+755.778465812" Jan 26 14:22:18 crc kubenswrapper[4922]: I0126 14:22:18.576597 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-7mfk9" podStartSLOduration=1.899459252 podStartE2EDuration="4.576592591s" podCreationTimestamp="2026-01-26 14:22:14 +0000 UTC" firstStartedPulling="2026-01-26 14:22:15.679305366 +0000 UTC m=+752.881568138" lastFinishedPulling="2026-01-26 14:22:18.356438695 +0000 UTC m=+755.558701477" observedRunningTime="2026-01-26 14:22:18.574637137 +0000 UTC m=+755.776899929" watchObservedRunningTime="2026-01-26 14:22:18.576592591 +0000 UTC m=+755.778855373" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.494120 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-99g5t"] Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.495621 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-99g5t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.497237 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dcv2n" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.508326 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs"] Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.509191 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.511602 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.528588 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-99g5t"] Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.533018 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs"] Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.540177 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-c6w6t"] Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.541170 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.595572 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e02060f5-4687-4f14-9e1a-d94d855d5563-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-w7dvs\" (UID: \"e02060f5-4687-4f14-9e1a-d94d855d5563\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.595660 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/594847ad-6266-4357-a47a-aa6383207517-dbus-socket\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.595692 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/594847ad-6266-4357-a47a-aa6383207517-nmstate-lock\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.595724 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-466fb\" (UniqueName: \"kubernetes.io/projected/e02060f5-4687-4f14-9e1a-d94d855d5563-kube-api-access-466fb\") pod \"nmstate-webhook-8474b5b9d8-w7dvs\" (UID: \"e02060f5-4687-4f14-9e1a-d94d855d5563\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.595757 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrzj8\" (UniqueName: \"kubernetes.io/projected/ca3a7e5f-211d-40ef-bfb8-261b1af52cda-kube-api-access-qrzj8\") pod \"nmstate-metrics-54757c584b-99g5t\" (UID: \"ca3a7e5f-211d-40ef-bfb8-261b1af52cda\") " 
pod="openshift-nmstate/nmstate-metrics-54757c584b-99g5t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.595781 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrlbz\" (UniqueName: \"kubernetes.io/projected/594847ad-6266-4357-a47a-aa6383207517-kube-api-access-xrlbz\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.595801 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/594847ad-6266-4357-a47a-aa6383207517-ovs-socket\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.648354 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb"] Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.649304 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.652883 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.652923 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.653219 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-h2hgx" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.658440 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb"] Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.696851 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/594847ad-6266-4357-a47a-aa6383207517-dbus-socket\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.696913 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p26k2\" (UniqueName: \"kubernetes.io/projected/fedbbbac-c62a-46aa-adfd-4bed0c5282fc-kube-api-access-p26k2\") pod \"nmstate-console-plugin-7754f76f8b-rhptb\" (UID: \"fedbbbac-c62a-46aa-adfd-4bed0c5282fc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.696945 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/594847ad-6266-4357-a47a-aa6383207517-nmstate-lock\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.696975 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-466fb\" (UniqueName: \"kubernetes.io/projected/e02060f5-4687-4f14-9e1a-d94d855d5563-kube-api-access-466fb\") pod \"nmstate-webhook-8474b5b9d8-w7dvs\" (UID: \"e02060f5-4687-4f14-9e1a-d94d855d5563\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.697000 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fedbbbac-c62a-46aa-adfd-4bed0c5282fc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-rhptb\" (UID: \"fedbbbac-c62a-46aa-adfd-4bed0c5282fc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.697028 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrzj8\" (UniqueName: \"kubernetes.io/projected/ca3a7e5f-211d-40ef-bfb8-261b1af52cda-kube-api-access-qrzj8\") pod \"nmstate-metrics-54757c584b-99g5t\" (UID: \"ca3a7e5f-211d-40ef-bfb8-261b1af52cda\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-99g5t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.697052 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrlbz\" (UniqueName: \"kubernetes.io/projected/594847ad-6266-4357-a47a-aa6383207517-kube-api-access-xrlbz\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.697091 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fedbbbac-c62a-46aa-adfd-4bed0c5282fc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-rhptb\" (UID: \"fedbbbac-c62a-46aa-adfd-4bed0c5282fc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.697115 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/594847ad-6266-4357-a47a-aa6383207517-ovs-socket\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.697168 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e02060f5-4687-4f14-9e1a-d94d855d5563-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-w7dvs\" (UID: \"e02060f5-4687-4f14-9e1a-d94d855d5563\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.697381 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/594847ad-6266-4357-a47a-aa6383207517-nmstate-lock\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.697563 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/594847ad-6266-4357-a47a-aa6383207517-ovs-socket\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.697689 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/594847ad-6266-4357-a47a-aa6383207517-dbus-socket\") pod \"nmstate-handler-c6w6t\" (UID: 
\"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.706817 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e02060f5-4687-4f14-9e1a-d94d855d5563-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-w7dvs\" (UID: \"e02060f5-4687-4f14-9e1a-d94d855d5563\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.717479 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrzj8\" (UniqueName: \"kubernetes.io/projected/ca3a7e5f-211d-40ef-bfb8-261b1af52cda-kube-api-access-qrzj8\") pod \"nmstate-metrics-54757c584b-99g5t\" (UID: \"ca3a7e5f-211d-40ef-bfb8-261b1af52cda\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-99g5t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.718775 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrlbz\" (UniqueName: \"kubernetes.io/projected/594847ad-6266-4357-a47a-aa6383207517-kube-api-access-xrlbz\") pod \"nmstate-handler-c6w6t\" (UID: \"594847ad-6266-4357-a47a-aa6383207517\") " pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.718946 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-466fb\" (UniqueName: \"kubernetes.io/projected/e02060f5-4687-4f14-9e1a-d94d855d5563-kube-api-access-466fb\") pod \"nmstate-webhook-8474b5b9d8-w7dvs\" (UID: \"e02060f5-4687-4f14-9e1a-d94d855d5563\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.797789 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fedbbbac-c62a-46aa-adfd-4bed0c5282fc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-rhptb\" (UID: \"fedbbbac-c62a-46aa-adfd-4bed0c5282fc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.797850 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fedbbbac-c62a-46aa-adfd-4bed0c5282fc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-rhptb\" (UID: \"fedbbbac-c62a-46aa-adfd-4bed0c5282fc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.797933 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p26k2\" (UniqueName: \"kubernetes.io/projected/fedbbbac-c62a-46aa-adfd-4bed0c5282fc-kube-api-access-p26k2\") pod \"nmstate-console-plugin-7754f76f8b-rhptb\" (UID: \"fedbbbac-c62a-46aa-adfd-4bed0c5282fc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.799147 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fedbbbac-c62a-46aa-adfd-4bed0c5282fc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-rhptb\" (UID: \"fedbbbac-c62a-46aa-adfd-4bed0c5282fc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.810697 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fedbbbac-c62a-46aa-adfd-4bed0c5282fc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-rhptb\" (UID: \"fedbbbac-c62a-46aa-adfd-4bed0c5282fc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.814308 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-99g5t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.822695 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p26k2\" (UniqueName: \"kubernetes.io/projected/fedbbbac-c62a-46aa-adfd-4bed0c5282fc-kube-api-access-p26k2\") pod \"nmstate-console-plugin-7754f76f8b-rhptb\" (UID: \"fedbbbac-c62a-46aa-adfd-4bed0c5282fc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.833047 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.837144 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7884bdc4dd-nhgzp"] Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.837914 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.850984 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7884bdc4dd-nhgzp"] Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.878500 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.899122 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a93582d1-6c68-4045-bc02-4a363c919f31-console-oauth-config\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.899170 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-trusted-ca-bundle\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.899203 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mm9\" (UniqueName: \"kubernetes.io/projected/a93582d1-6c68-4045-bc02-4a363c919f31-kube-api-access-f5mm9\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.899274 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a93582d1-6c68-4045-bc02-4a363c919f31-console-serving-cert\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.899299 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-console-config\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.899320 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-oauth-serving-cert\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.899343 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-service-ca\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:19 crc kubenswrapper[4922]: I0126 14:22:19.972614 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.000730 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a93582d1-6c68-4045-bc02-4a363c919f31-console-serving-cert\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.000787 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-console-config\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.000818 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-oauth-serving-cert\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.000844 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-service-ca\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.000876 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a93582d1-6c68-4045-bc02-4a363c919f31-console-oauth-config\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.000898 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-trusted-ca-bundle\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.000932 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mm9\" (UniqueName: \"kubernetes.io/projected/a93582d1-6c68-4045-bc02-4a363c919f31-kube-api-access-f5mm9\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.001831 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-oauth-serving-cert\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.004544 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-console-config\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.009140 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-service-ca\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.009467 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a93582d1-6c68-4045-bc02-4a363c919f31-trusted-ca-bundle\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.009471 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a93582d1-6c68-4045-bc02-4a363c919f31-console-serving-cert\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.010446 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a93582d1-6c68-4045-bc02-4a363c919f31-console-oauth-config\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.018786 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mm9\" (UniqueName: \"kubernetes.io/projected/a93582d1-6c68-4045-bc02-4a363c919f31-kube-api-access-f5mm9\") pod \"console-7884bdc4dd-nhgzp\" (UID: \"a93582d1-6c68-4045-bc02-4a363c919f31\") " pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.094621 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-99g5t"] Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 
14:22:20.144767 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs"] Jan 26 14:22:20 crc kubenswrapper[4922]: W0126 14:22:20.147378 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode02060f5_4687_4f14_9e1a_d94d855d5563.slice/crio-f99bab5773e1f414790e36f7af2e7dd0d77b71ef7bb443601bd8f6357f0c3fc5 WatchSource:0}: Error finding container f99bab5773e1f414790e36f7af2e7dd0d77b71ef7bb443601bd8f6357f0c3fc5: Status 404 returned error can't find the container with id f99bab5773e1f414790e36f7af2e7dd0d77b71ef7bb443601bd8f6357f0c3fc5 Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.192114 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.419355 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb"] Jan 26 14:22:20 crc kubenswrapper[4922]: W0126 14:22:20.428735 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfedbbbac_c62a_46aa_adfd_4bed0c5282fc.slice/crio-7b7734b1e9b90a45128a9e6e086af05577109fc86d87276e74356924bf2eba76 WatchSource:0}: Error finding container 7b7734b1e9b90a45128a9e6e086af05577109fc86d87276e74356924bf2eba76: Status 404 returned error can't find the container with id 7b7734b1e9b90a45128a9e6e086af05577109fc86d87276e74356924bf2eba76 Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.579669 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" event={"ID":"e02060f5-4687-4f14-9e1a-d94d855d5563","Type":"ContainerStarted","Data":"f99bab5773e1f414790e36f7af2e7dd0d77b71ef7bb443601bd8f6357f0c3fc5"} Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.580714 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-99g5t" event={"ID":"ca3a7e5f-211d-40ef-bfb8-261b1af52cda","Type":"ContainerStarted","Data":"ddf3b77ddb6d81a4fda9fb15e87924d20edbe20dc435d725dfee458611e99069"} Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.581867 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" event={"ID":"fedbbbac-c62a-46aa-adfd-4bed0c5282fc","Type":"ContainerStarted","Data":"7b7734b1e9b90a45128a9e6e086af05577109fc86d87276e74356924bf2eba76"} Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.582989 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c6w6t" event={"ID":"594847ad-6266-4357-a47a-aa6383207517","Type":"ContainerStarted","Data":"1b05b4aeb7a6fb9fad83faee1256cf4291acae392609de5356106be8af2b48cf"} Jan 26 14:22:20 crc kubenswrapper[4922]: I0126 14:22:20.592796 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7884bdc4dd-nhgzp"] Jan 26 14:22:20 crc kubenswrapper[4922]: W0126 14:22:20.598923 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda93582d1_6c68_4045_bc02_4a363c919f31.slice/crio-8d2fa46700d32dc1b8cdc0056f5116ec959a1c695237e85e03ba69b6f358e9c6 WatchSource:0}: Error finding container 8d2fa46700d32dc1b8cdc0056f5116ec959a1c695237e85e03ba69b6f358e9c6: Status 404 returned error can't find the container with id 
8d2fa46700d32dc1b8cdc0056f5116ec959a1c695237e85e03ba69b6f358e9c6 Jan 26 14:22:21 crc kubenswrapper[4922]: I0126 14:22:21.591617 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7884bdc4dd-nhgzp" event={"ID":"a93582d1-6c68-4045-bc02-4a363c919f31","Type":"ContainerStarted","Data":"1142b8a4071213f8317aa637d25033adab8ace6909a8dc4b9262c3ab3a999930"} Jan 26 14:22:21 crc kubenswrapper[4922]: I0126 14:22:21.591939 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7884bdc4dd-nhgzp" event={"ID":"a93582d1-6c68-4045-bc02-4a363c919f31","Type":"ContainerStarted","Data":"8d2fa46700d32dc1b8cdc0056f5116ec959a1c695237e85e03ba69b6f358e9c6"} Jan 26 14:22:21 crc kubenswrapper[4922]: I0126 14:22:21.608838 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7884bdc4dd-nhgzp" podStartSLOduration=2.608808448 podStartE2EDuration="2.608808448s" podCreationTimestamp="2026-01-26 14:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:22:21.605286982 +0000 UTC m=+758.807549774" watchObservedRunningTime="2026-01-26 14:22:21.608808448 +0000 UTC m=+758.811071220" Jan 26 14:22:23 crc kubenswrapper[4922]: I0126 14:22:23.439597 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:23 crc kubenswrapper[4922]: I0126 14:22:23.439878 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:24 crc kubenswrapper[4922]: I0126 14:22:24.483022 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8vm22" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerName="registry-server" probeResult="failure" output=< Jan 26 14:22:24 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Jan 26 14:22:24 crc kubenswrapper[4922]: > Jan 26 14:22:25 crc kubenswrapper[4922]: I0126 14:22:25.621358 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" event={"ID":"fedbbbac-c62a-46aa-adfd-4bed0c5282fc","Type":"ContainerStarted","Data":"16cc4d15f52b110546ba669b2099b93ea2d0ae99fc3e98e0bda80b68e1fd094c"} Jan 26 14:22:25 crc kubenswrapper[4922]: I0126 14:22:25.626130 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c6w6t" event={"ID":"594847ad-6266-4357-a47a-aa6383207517","Type":"ContainerStarted","Data":"055e5d2b9146cc685f7c1121bb04698a1c752974f3f993ae9eb624a7fba1735f"} Jan 26 14:22:25 crc kubenswrapper[4922]: I0126 14:22:25.627319 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:25 crc kubenswrapper[4922]: I0126 14:22:25.628999 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" event={"ID":"e02060f5-4687-4f14-9e1a-d94d855d5563","Type":"ContainerStarted","Data":"46d4da6bb4a1ff65f665993f9d1c77463023333f1513490b477c2c6e616e1b42"} Jan 26 14:22:25 crc kubenswrapper[4922]: I0126 14:22:25.629562 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:25 crc kubenswrapper[4922]: I0126 14:22:25.631438 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-99g5t" event={"ID":"ca3a7e5f-211d-40ef-bfb8-261b1af52cda","Type":"ContainerStarted","Data":"f16e0def2ce61021f3b26c80e6a1c36da268c2570dba4e299fc004713eae54f2"} Jan 26 14:22:25 crc kubenswrapper[4922]: I0126 14:22:25.641013 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-rhptb" podStartSLOduration=2.422269178 podStartE2EDuration="6.640986514s" podCreationTimestamp="2026-01-26 14:22:19 +0000 UTC" firstStartedPulling="2026-01-26 14:22:20.431687595 +0000 UTC m=+757.633950367" lastFinishedPulling="2026-01-26 14:22:24.650404931 +0000 UTC m=+761.852667703" observedRunningTime="2026-01-26 14:22:25.637848339 +0000 UTC m=+762.840111161" watchObservedRunningTime="2026-01-26 14:22:25.640986514 +0000 UTC m=+762.843249296" Jan 26 14:22:25 crc kubenswrapper[4922]: I0126 14:22:25.671305 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" podStartSLOduration=2.174321967 podStartE2EDuration="6.671269309s" podCreationTimestamp="2026-01-26 14:22:19 +0000 UTC" firstStartedPulling="2026-01-26 14:22:20.148958206 +0000 UTC m=+757.351220978" lastFinishedPulling="2026-01-26 14:22:24.645905548 +0000 UTC m=+761.848168320" observedRunningTime="2026-01-26 14:22:25.660166257 +0000 UTC m=+762.862429039" watchObservedRunningTime="2026-01-26 14:22:25.671269309 +0000 UTC m=+762.873532121" Jan 26 14:22:25 crc kubenswrapper[4922]: I0126 14:22:25.696453 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-c6w6t" podStartSLOduration=2.008199562 podStartE2EDuration="6.696413374s" podCreationTimestamp="2026-01-26 14:22:19 +0000 UTC" firstStartedPulling="2026-01-26 14:22:19.906526874 +0000 UTC m=+757.108789646" lastFinishedPulling="2026-01-26 14:22:24.594740686 +0000 UTC m=+761.797003458" observedRunningTime="2026-01-26 14:22:25.686255448 +0000 UTC m=+762.888518220" watchObservedRunningTime="2026-01-26 14:22:25.696413374 +0000 UTC m=+762.898676186" Jan 26 14:22:27 crc kubenswrapper[4922]: I0126 14:22:27.649492 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-99g5t" event={"ID":"ca3a7e5f-211d-40ef-bfb8-261b1af52cda","Type":"ContainerStarted","Data":"1d48567e1fec0bef4c4bdec1eed52568aac6002f7785110e02edd19faabb9337"} Jan 26 14:22:27 crc kubenswrapper[4922]: I0126 14:22:27.675705 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-99g5t" podStartSLOduration=1.7255079150000001 podStartE2EDuration="8.67568599s" podCreationTimestamp="2026-01-26 14:22:19 +0000 UTC" firstStartedPulling="2026-01-26 14:22:20.112646087 +0000 UTC m=+757.314908859" lastFinishedPulling="2026-01-26 14:22:27.062824162 +0000 UTC m=+764.265086934" observedRunningTime="2026-01-26 14:22:27.673016398 +0000 UTC m=+764.875279170" watchObservedRunningTime="2026-01-26 14:22:27.67568599 +0000 UTC m=+764.877948762" Jan 26 14:22:29 crc kubenswrapper[4922]: I0126 14:22:29.921939 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-c6w6t" Jan 26 14:22:30 crc kubenswrapper[4922]: I0126 14:22:30.193115 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:30 crc kubenswrapper[4922]: I0126 14:22:30.193433 4922 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:30 crc kubenswrapper[4922]: I0126 14:22:30.201751 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:30 crc kubenswrapper[4922]: I0126 14:22:30.679395 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7884bdc4dd-nhgzp" Jan 26 14:22:30 crc kubenswrapper[4922]: I0126 14:22:30.755472 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fd75n"] Jan 26 14:22:33 crc kubenswrapper[4922]: I0126 14:22:33.499685 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:33 crc kubenswrapper[4922]: I0126 14:22:33.585559 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:33 crc kubenswrapper[4922]: I0126 14:22:33.748896 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vm22"] Jan 26 14:22:34 crc kubenswrapper[4922]: I0126 14:22:34.701552 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8vm22" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerName="registry-server" containerID="cri-o://83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a" gracePeriod=2 Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.268286 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.346243 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzmhd\" (UniqueName: \"kubernetes.io/projected/c3e2b925-54bc-4f52-b9dc-68a418c286d6-kube-api-access-wzmhd\") pod \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.346372 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-catalog-content\") pod \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.346428 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-utilities\") pod \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\" (UID: \"c3e2b925-54bc-4f52-b9dc-68a418c286d6\") " Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.348169 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-utilities" (OuterVolumeSpecName: "utilities") pod "c3e2b925-54bc-4f52-b9dc-68a418c286d6" (UID: "c3e2b925-54bc-4f52-b9dc-68a418c286d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.359363 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e2b925-54bc-4f52-b9dc-68a418c286d6-kube-api-access-wzmhd" (OuterVolumeSpecName: "kube-api-access-wzmhd") pod "c3e2b925-54bc-4f52-b9dc-68a418c286d6" (UID: "c3e2b925-54bc-4f52-b9dc-68a418c286d6"). InnerVolumeSpecName "kube-api-access-wzmhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.447800 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzmhd\" (UniqueName: \"kubernetes.io/projected/c3e2b925-54bc-4f52-b9dc-68a418c286d6-kube-api-access-wzmhd\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.447838 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.483691 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3e2b925-54bc-4f52-b9dc-68a418c286d6" (UID: "c3e2b925-54bc-4f52-b9dc-68a418c286d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.548750 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3e2b925-54bc-4f52-b9dc-68a418c286d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.723762 4922 generic.go:334] "Generic (PLEG): container finished" podID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerID="83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a" exitCode=0 Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.723837 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vm22" event={"ID":"c3e2b925-54bc-4f52-b9dc-68a418c286d6","Type":"ContainerDied","Data":"83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a"} Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.723885 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vm22" event={"ID":"c3e2b925-54bc-4f52-b9dc-68a418c286d6","Type":"ContainerDied","Data":"7426cf86f20c2f6602f6e2c92a1ea8eec0760932c849a1e80bcf050fe7bf4ffc"} Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.723916 4922 scope.go:117] "RemoveContainer" containerID="83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.723960 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vm22" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.754790 4922 scope.go:117] "RemoveContainer" containerID="99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.803530 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vm22"] Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.811272 4922 scope.go:117] "RemoveContainer" containerID="286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.814408 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8vm22"] Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.837879 4922 scope.go:117] "RemoveContainer" containerID="83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a" Jan 26 14:22:36 crc kubenswrapper[4922]: E0126 14:22:36.838690 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a\": container with ID starting with 83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a not found: ID does not exist" containerID="83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.838766 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a"} err="failed to get container status \"83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a\": rpc error: code = NotFound desc = could not find container \"83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a\": container with ID starting with 83175f82b0de85d5fd91b2d8a07ecaf6408d87ba2568dc183bdc036e66ade86a not found: ID does not exist" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.838815 4922 scope.go:117] "RemoveContainer" containerID="99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b" Jan 26 14:22:36 crc kubenswrapper[4922]: E0126 14:22:36.839405 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b\": container with ID starting with 99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b not found: ID does not exist" containerID="99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.839486 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b"} err="failed to get container status \"99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b\": rpc error: code = NotFound desc = could not find container \"99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b\": container with ID starting with 99e0c2003b7b51bbd903cc3f92c5145e41fc8ae3a798dbf171b9528fd7650c5b not found: ID does not exist" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.839535 4922 scope.go:117] "RemoveContainer" containerID="286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074" Jan 26 14:22:36 crc kubenswrapper[4922]: E0126 14:22:36.840163 4922 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074\": container with ID starting with 286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074 not found: ID does not exist" containerID="286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074" Jan 26 14:22:36 crc kubenswrapper[4922]: I0126 14:22:36.840280 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074"} err="failed to get container status \"286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074\": rpc error: code = NotFound desc = could not find container \"286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074\": container with ID starting with 286205d8086a9d0921863d1803df7e976902a81aef1952650afdd275dff53074 not found: ID does not exist" Jan 26 14:22:37 crc kubenswrapper[4922]: I0126 14:22:37.104890 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" path="/var/lib/kubelet/pods/c3e2b925-54bc-4f52-b9dc-68a418c286d6/volumes" Jan 26 14:22:39 crc kubenswrapper[4922]: I0126 14:22:39.845890 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-w7dvs" Jan 26 14:22:41 crc kubenswrapper[4922]: I0126 14:22:41.306792 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:22:41 crc kubenswrapper[4922]: I0126 14:22:41.307194 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:22:41 crc kubenswrapper[4922]: I0126 14:22:41.307268 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:22:41 crc kubenswrapper[4922]: I0126 14:22:41.308330 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4826d47a8978aad8de4d72d3370de97b0ca58dd44ff72fdfe2ad2319c73f0def"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:22:41 crc kubenswrapper[4922]: I0126 14:22:41.308485 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://4826d47a8978aad8de4d72d3370de97b0ca58dd44ff72fdfe2ad2319c73f0def" gracePeriod=600 Jan 26 14:22:41 crc kubenswrapper[4922]: I0126 14:22:41.768284 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="4826d47a8978aad8de4d72d3370de97b0ca58dd44ff72fdfe2ad2319c73f0def" exitCode=0 Jan 26 14:22:41 crc kubenswrapper[4922]: I0126 14:22:41.768390 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"4826d47a8978aad8de4d72d3370de97b0ca58dd44ff72fdfe2ad2319c73f0def"} Jan 26 14:22:41 crc kubenswrapper[4922]: I0126 14:22:41.768945 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"38f65164faf1c2f39140b3ebf8dc530554515c361f23b474730bf8efbdde8f32"} Jan 26 14:22:41 crc kubenswrapper[4922]: I0126 14:22:41.768985 4922 scope.go:117] "RemoveContainer" containerID="e6cf20dd05518b0703e95fb22d716a156a551a6feacd1fad61c7d301c3595e35" Jan 26 14:22:55 crc kubenswrapper[4922]: I0126 14:22:55.808092 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fd75n" podUID="69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" containerName="console" containerID="cri-o://03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7" gracePeriod=15 Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.128493 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89"] Jan 26 14:22:56 crc kubenswrapper[4922]: E0126 14:22:56.128736 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerName="extract-content" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.128749 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerName="extract-content" Jan 26 14:22:56 crc kubenswrapper[4922]: E0126 14:22:56.128771 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerName="extract-utilities" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.128777 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerName="extract-utilities" Jan 26 14:22:56 crc kubenswrapper[4922]: E0126 14:22:56.128784 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerName="registry-server" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.128790 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerName="registry-server" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.128903 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e2b925-54bc-4f52-b9dc-68a418c286d6" containerName="registry-server" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.129716 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.132176 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.140192 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89"] Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.267976 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.268085 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.268381 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpqbf\" (UniqueName: \"kubernetes.io/projected/799897f3-cce8-4769-8763-905e8e372ffb-kube-api-access-wpqbf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.370470 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpqbf\" (UniqueName: \"kubernetes.io/projected/799897f3-cce8-4769-8763-905e8e372ffb-kube-api-access-wpqbf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.370569 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.370597 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.371081 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.371325 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.413968 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpqbf\" (UniqueName: \"kubernetes.io/projected/799897f3-cce8-4769-8763-905e8e372ffb-kube-api-access-wpqbf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.453154 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.687753 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fd75n_69e8d6c7-f2f5-465e-93ee-e4eead1f58c4/console/0.log" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.688043 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.872542 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89"] Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.877734 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-service-ca\") pod \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.877794 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-config\") pod \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.877915 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-trusted-ca-bundle\") pod \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.877991 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-kube-api-access-cnqcq\") pod \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.878054 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-oauth-serving-cert\") pod \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.878124 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-oauth-config\") pod \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.878344 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-serving-cert\") pod \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\" (UID: \"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4\") " Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.879026 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-service-ca" (OuterVolumeSpecName: "service-ca") pod "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" (UID: "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.879040 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" (UID: "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.879046 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-config" (OuterVolumeSpecName: "console-config") pod "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" (UID: "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.879426 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" (UID: "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:22:56 crc kubenswrapper[4922]: W0126 14:22:56.881366 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799897f3_cce8_4769_8763_905e8e372ffb.slice/crio-58439440506a2505d5fbbd0b23c3b8e2529972b2a698af2020a3b6bba07f8dc8 WatchSource:0}: Error finding container 58439440506a2505d5fbbd0b23c3b8e2529972b2a698af2020a3b6bba07f8dc8: Status 404 returned error can't find the container with id 58439440506a2505d5fbbd0b23c3b8e2529972b2a698af2020a3b6bba07f8dc8 Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.883995 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-kube-api-access-cnqcq" (OuterVolumeSpecName: "kube-api-access-cnqcq") pod "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" (UID: "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4"). InnerVolumeSpecName "kube-api-access-cnqcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.884750 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" (UID: "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.885144 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" (UID: "69e8d6c7-f2f5-465e-93ee-e4eead1f58c4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.905096 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" event={"ID":"799897f3-cce8-4769-8763-905e8e372ffb","Type":"ContainerStarted","Data":"58439440506a2505d5fbbd0b23c3b8e2529972b2a698af2020a3b6bba07f8dc8"} Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.910292 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fd75n_69e8d6c7-f2f5-465e-93ee-e4eead1f58c4/console/0.log" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.910324 4922 generic.go:334] "Generic (PLEG): container finished" podID="69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" containerID="03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7" exitCode=2 Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.910349 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fd75n" event={"ID":"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4","Type":"ContainerDied","Data":"03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7"} Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.910368 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fd75n" event={"ID":"69e8d6c7-f2f5-465e-93ee-e4eead1f58c4","Type":"ContainerDied","Data":"4cd16abca1a4a6cb4698b3068a4d9b08b896cf9adcfc96fa80544ddb46da015c"} Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.910385 4922 scope.go:117] "RemoveContainer" containerID="03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.910470 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fd75n" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.980160 4922 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.980456 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.980469 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.980484 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnqcq\" (UniqueName: \"kubernetes.io/projected/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-kube-api-access-cnqcq\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.980496 4922 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.980507 4922 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.980519 4922 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.984422 4922 scope.go:117] "RemoveContainer" containerID="03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7" Jan 26 14:22:56 crc kubenswrapper[4922]: E0126 14:22:56.984942 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7\": container with ID starting with 03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7 not found: ID does not exist" containerID="03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7" Jan 26 14:22:56 crc kubenswrapper[4922]: I0126 14:22:56.984996 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7"} err="failed to get container status \"03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7\": rpc error: code = NotFound desc = could not find container \"03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7\": container with ID starting with 03243c54bf7f422cecd21a098ef70b04fb38e5896584464d9d174fa0ce3432e7 not found: ID does not exist" Jan 26 14:22:57 crc kubenswrapper[4922]: I0126 14:22:57.003806 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fd75n"] Jan 26 14:22:57 crc kubenswrapper[4922]: I0126 14:22:57.010519 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-console/console-f9d7485db-fd75n"] Jan 26 14:22:57 crc kubenswrapper[4922]: I0126 14:22:57.104112 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" path="/var/lib/kubelet/pods/69e8d6c7-f2f5-465e-93ee-e4eead1f58c4/volumes" Jan 26 14:22:57 crc kubenswrapper[4922]: I0126 14:22:57.919591 4922 generic.go:334] "Generic (PLEG): container finished" podID="799897f3-cce8-4769-8763-905e8e372ffb" containerID="758ddb4b6272dc7d6b1300af78de500b3c8328989c2093c14a73b6c75cbee1d7" exitCode=0 Jan 26 14:22:57 crc kubenswrapper[4922]: I0126 14:22:57.919639 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" event={"ID":"799897f3-cce8-4769-8763-905e8e372ffb","Type":"ContainerDied","Data":"758ddb4b6272dc7d6b1300af78de500b3c8328989c2093c14a73b6c75cbee1d7"} Jan 26 14:23:00 crc kubenswrapper[4922]: I0126 14:23:00.946738 4922 generic.go:334] "Generic (PLEG): container finished" podID="799897f3-cce8-4769-8763-905e8e372ffb" containerID="329e64f4a5e8517927b151791b2a3bd8e96bd613c4bad3a35a18e737e030890c" exitCode=0 Jan 26 14:23:00 crc kubenswrapper[4922]: I0126 14:23:00.946878 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" event={"ID":"799897f3-cce8-4769-8763-905e8e372ffb","Type":"ContainerDied","Data":"329e64f4a5e8517927b151791b2a3bd8e96bd613c4bad3a35a18e737e030890c"} Jan 26 14:23:01 crc kubenswrapper[4922]: I0126 14:23:01.959320 4922 generic.go:334] "Generic (PLEG): container finished" podID="799897f3-cce8-4769-8763-905e8e372ffb" containerID="e533d40c9144aca11442a68786a264355fc20664ccdf9bd601a1dfc0ccadb5ba" exitCode=0 Jan 26 14:23:01 crc kubenswrapper[4922]: I0126 14:23:01.959414 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" event={"ID":"799897f3-cce8-4769-8763-905e8e372ffb","Type":"ContainerDied","Data":"e533d40c9144aca11442a68786a264355fc20664ccdf9bd601a1dfc0ccadb5ba"} Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.271337 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.375929 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpqbf\" (UniqueName: \"kubernetes.io/projected/799897f3-cce8-4769-8763-905e8e372ffb-kube-api-access-wpqbf\") pod \"799897f3-cce8-4769-8763-905e8e372ffb\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.375994 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-bundle\") pod \"799897f3-cce8-4769-8763-905e8e372ffb\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.376027 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-util\") pod \"799897f3-cce8-4769-8763-905e8e372ffb\" (UID: \"799897f3-cce8-4769-8763-905e8e372ffb\") " Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.377433 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-bundle" (OuterVolumeSpecName: "bundle") pod "799897f3-cce8-4769-8763-905e8e372ffb" (UID: "799897f3-cce8-4769-8763-905e8e372ffb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.384716 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799897f3-cce8-4769-8763-905e8e372ffb-kube-api-access-wpqbf" (OuterVolumeSpecName: "kube-api-access-wpqbf") pod "799897f3-cce8-4769-8763-905e8e372ffb" (UID: "799897f3-cce8-4769-8763-905e8e372ffb"). InnerVolumeSpecName "kube-api-access-wpqbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.398886 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-util" (OuterVolumeSpecName: "util") pod "799897f3-cce8-4769-8763-905e8e372ffb" (UID: "799897f3-cce8-4769-8763-905e8e372ffb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.477600 4922 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.477642 4922 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/799897f3-cce8-4769-8763-905e8e372ffb-util\") on node \"crc\" DevicePath \"\"" Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.477662 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpqbf\" (UniqueName: \"kubernetes.io/projected/799897f3-cce8-4769-8763-905e8e372ffb-kube-api-access-wpqbf\") on node \"crc\" DevicePath \"\"" Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.979525 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" event={"ID":"799897f3-cce8-4769-8763-905e8e372ffb","Type":"ContainerDied","Data":"58439440506a2505d5fbbd0b23c3b8e2529972b2a698af2020a3b6bba07f8dc8"} Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.979724 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58439440506a2505d5fbbd0b23c3b8e2529972b2a698af2020a3b6bba07f8dc8" Jan 26 14:23:03 crc kubenswrapper[4922]: I0126 14:23:03.979627 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.393465 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl"] Jan 26 14:23:14 crc kubenswrapper[4922]: E0126 14:23:14.394121 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" containerName="console" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.394194 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" containerName="console" Jan 26 14:23:14 crc kubenswrapper[4922]: E0126 14:23:14.394213 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799897f3-cce8-4769-8763-905e8e372ffb" containerName="extract" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.394219 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="799897f3-cce8-4769-8763-905e8e372ffb" containerName="extract" Jan 26 14:23:14 crc kubenswrapper[4922]: E0126 14:23:14.394227 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799897f3-cce8-4769-8763-905e8e372ffb" containerName="pull" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.394233 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="799897f3-cce8-4769-8763-905e8e372ffb" containerName="pull" Jan 26 14:23:14 crc kubenswrapper[4922]: E0126 14:23:14.394242 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799897f3-cce8-4769-8763-905e8e372ffb" containerName="util" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.394248 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="799897f3-cce8-4769-8763-905e8e372ffb" containerName="util" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.394340 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="799897f3-cce8-4769-8763-905e8e372ffb" containerName="extract" Jan 
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.394352 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e8d6c7-f2f5-465e-93ee-e4eead1f58c4" containerName="console"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.394752 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.398967 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.399524 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4446d"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.399822 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.399991 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.400145 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.405877 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl"]
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.562501 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/74e0af8f-5a55-4376-912d-095cd5078f93-apiservice-cert\") pod \"metallb-operator-controller-manager-6c599ccf7c-52gjl\" (UID: \"74e0af8f-5a55-4376-912d-095cd5078f93\") " pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.562566 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6rx\" (UniqueName: \"kubernetes.io/projected/74e0af8f-5a55-4376-912d-095cd5078f93-kube-api-access-9n6rx\") pod \"metallb-operator-controller-manager-6c599ccf7c-52gjl\" (UID: \"74e0af8f-5a55-4376-912d-095cd5078f93\") " pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.562599 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/74e0af8f-5a55-4376-912d-095cd5078f93-webhook-cert\") pod \"metallb-operator-controller-manager-6c599ccf7c-52gjl\" (UID: \"74e0af8f-5a55-4376-912d-095cd5078f93\") " pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.626922 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"]
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.627624 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.633015 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.633170 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vst5h" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.633015 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.653703 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"] Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.663635 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/74e0af8f-5a55-4376-912d-095cd5078f93-apiservice-cert\") pod \"metallb-operator-controller-manager-6c599ccf7c-52gjl\" (UID: \"74e0af8f-5a55-4376-912d-095cd5078f93\") " pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.663926 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6rx\" (UniqueName: \"kubernetes.io/projected/74e0af8f-5a55-4376-912d-095cd5078f93-kube-api-access-9n6rx\") pod \"metallb-operator-controller-manager-6c599ccf7c-52gjl\" (UID: \"74e0af8f-5a55-4376-912d-095cd5078f93\") " pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.664024 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/74e0af8f-5a55-4376-912d-095cd5078f93-webhook-cert\") pod \"metallb-operator-controller-manager-6c599ccf7c-52gjl\" (UID: \"74e0af8f-5a55-4376-912d-095cd5078f93\") " pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.673234 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/74e0af8f-5a55-4376-912d-095cd5078f93-apiservice-cert\") pod \"metallb-operator-controller-manager-6c599ccf7c-52gjl\" (UID: \"74e0af8f-5a55-4376-912d-095cd5078f93\") " pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.684758 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/74e0af8f-5a55-4376-912d-095cd5078f93-webhook-cert\") pod \"metallb-operator-controller-manager-6c599ccf7c-52gjl\" (UID: \"74e0af8f-5a55-4376-912d-095cd5078f93\") " pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.699262 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6rx\" (UniqueName: \"kubernetes.io/projected/74e0af8f-5a55-4376-912d-095cd5078f93-kube-api-access-9n6rx\") pod \"metallb-operator-controller-manager-6c599ccf7c-52gjl\" (UID: \"74e0af8f-5a55-4376-912d-095cd5078f93\") " pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.712931 4922 util.go:30] "No sandbox 
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.765802 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7938fe0-7f27-49c6-959d-62405a4847f1-webhook-cert\") pod \"metallb-operator-webhook-server-fc78bf7bd-42g89\" (UID: \"b7938fe0-7f27-49c6-959d-62405a4847f1\") " pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.766122 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phwx\" (UniqueName: \"kubernetes.io/projected/b7938fe0-7f27-49c6-959d-62405a4847f1-kube-api-access-6phwx\") pod \"metallb-operator-webhook-server-fc78bf7bd-42g89\" (UID: \"b7938fe0-7f27-49c6-959d-62405a4847f1\") " pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.766151 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7938fe0-7f27-49c6-959d-62405a4847f1-apiservice-cert\") pod \"metallb-operator-webhook-server-fc78bf7bd-42g89\" (UID: \"b7938fe0-7f27-49c6-959d-62405a4847f1\") " pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.866872 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7938fe0-7f27-49c6-959d-62405a4847f1-webhook-cert\") pod \"metallb-operator-webhook-server-fc78bf7bd-42g89\" (UID: \"b7938fe0-7f27-49c6-959d-62405a4847f1\") " pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.866958 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6phwx\" (UniqueName: \"kubernetes.io/projected/b7938fe0-7f27-49c6-959d-62405a4847f1-kube-api-access-6phwx\") pod \"metallb-operator-webhook-server-fc78bf7bd-42g89\" (UID: \"b7938fe0-7f27-49c6-959d-62405a4847f1\") " pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.866982 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7938fe0-7f27-49c6-959d-62405a4847f1-apiservice-cert\") pod \"metallb-operator-webhook-server-fc78bf7bd-42g89\" (UID: \"b7938fe0-7f27-49c6-959d-62405a4847f1\") " pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.871542 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7938fe0-7f27-49c6-959d-62405a4847f1-apiservice-cert\") pod \"metallb-operator-webhook-server-fc78bf7bd-42g89\" (UID: \"b7938fe0-7f27-49c6-959d-62405a4847f1\") " pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.888715 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7938fe0-7f27-49c6-959d-62405a4847f1-webhook-cert\") pod \"metallb-operator-webhook-server-fc78bf7bd-42g89\" (UID: \"b7938fe0-7f27-49c6-959d-62405a4847f1\") " pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.891580 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6phwx\" (UniqueName: \"kubernetes.io/projected/b7938fe0-7f27-49c6-959d-62405a4847f1-kube-api-access-6phwx\") pod \"metallb-operator-webhook-server-fc78bf7bd-42g89\" (UID: \"b7938fe0-7f27-49c6-959d-62405a4847f1\") " pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89" Jan 26 14:23:14 crc kubenswrapper[4922]: I0126 14:23:14.947567 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89" Jan 26 14:23:15 crc kubenswrapper[4922]: I0126 14:23:15.019429 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl"] Jan 26 14:23:15 crc kubenswrapper[4922]: W0126 14:23:15.028426 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74e0af8f_5a55_4376_912d_095cd5078f93.slice/crio-ec266ecc2fe868f9f30c9226eae6a9626cb36dc1823fb843acaf8384ed9d9625 WatchSource:0}: Error finding container ec266ecc2fe868f9f30c9226eae6a9626cb36dc1823fb843acaf8384ed9d9625: Status 404 returned error can't find the container with id ec266ecc2fe868f9f30c9226eae6a9626cb36dc1823fb843acaf8384ed9d9625 Jan 26 14:23:15 crc kubenswrapper[4922]: I0126 14:23:15.057374 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" event={"ID":"74e0af8f-5a55-4376-912d-095cd5078f93","Type":"ContainerStarted","Data":"ec266ecc2fe868f9f30c9226eae6a9626cb36dc1823fb843acaf8384ed9d9625"} Jan 26 14:23:15 crc kubenswrapper[4922]: I0126 14:23:15.431029 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"] Jan 26 14:23:16 crc kubenswrapper[4922]: I0126 14:23:16.067280 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89" event={"ID":"b7938fe0-7f27-49c6-959d-62405a4847f1","Type":"ContainerStarted","Data":"7bd05ce536f3ec1f9f2a0ce51d4aff697a6b03cdb5b8b99d3a550e4d14aaecfc"} Jan 26 14:23:19 crc kubenswrapper[4922]: I0126 14:23:19.116304 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" event={"ID":"74e0af8f-5a55-4376-912d-095cd5078f93","Type":"ContainerStarted","Data":"db50f352f26aef281ed6157c411595c63c261025250eb649b726fcb8b0ccb2d2"} Jan 26 14:23:19 crc kubenswrapper[4922]: I0126 14:23:19.116905 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" Jan 26 14:23:19 crc kubenswrapper[4922]: I0126 14:23:19.139541 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl" podStartSLOduration=2.01501915 podStartE2EDuration="5.139521743s" podCreationTimestamp="2026-01-26 14:23:14 +0000 UTC" firstStartedPulling="2026-01-26 14:23:15.031184346 +0000 UTC m=+812.233447118" lastFinishedPulling="2026-01-26 14:23:18.155686939 +0000 UTC m=+815.357949711" observedRunningTime="2026-01-26 14:23:19.131928358 +0000 UTC m=+816.334191130" watchObservedRunningTime="2026-01-26 14:23:19.139521743 +0000 UTC m=+816.341784515" Jan 26 14:23:22 crc 
Jan 26 14:23:22 crc kubenswrapper[4922]: I0126 14:23:22.136895 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89" event={"ID":"b7938fe0-7f27-49c6-959d-62405a4847f1","Type":"ContainerStarted","Data":"61aaef23caf03890c22a71d24b8d31efa3ca8bc56cd74d7d187d1ed4065c301c"}
Jan 26 14:23:22 crc kubenswrapper[4922]: I0126 14:23:22.137538 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Jan 26 14:23:22 crc kubenswrapper[4922]: I0126 14:23:22.174573 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89" podStartSLOduration=2.4355988330000002 podStartE2EDuration="8.174550207s" podCreationTimestamp="2026-01-26 14:23:14 +0000 UTC" firstStartedPulling="2026-01-26 14:23:15.439336271 +0000 UTC m=+812.641599033" lastFinishedPulling="2026-01-26 14:23:21.178287635 +0000 UTC m=+818.380550407" observedRunningTime="2026-01-26 14:23:22.170895294 +0000 UTC m=+819.373158106" watchObservedRunningTime="2026-01-26 14:23:22.174550207 +0000 UTC m=+819.376813009"
Jan 26 14:23:34 crc kubenswrapper[4922]: I0126 14:23:34.954280 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-fc78bf7bd-42g89"
Jan 26 14:23:54 crc kubenswrapper[4922]: I0126 14:23:54.717375 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c599ccf7c-52gjl"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.540876 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-kkr8x"]
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.543438 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.548369 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.548489 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-js7dc"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.548527 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.569920 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477"]
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.570634 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.572877 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.588383 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477"]
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.644999 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-q7phl"]
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.645931 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q7phl"
Need to start a new one" pod="metallb-system/speaker-q7phl" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.650818 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.651071 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.651360 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zfvj4" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.653216 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.659589 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c35b1371-974a-4a0f-b8a4-d7bf024090aa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gl477\" (UID: \"c35b1371-974a-4a0f-b8a4-d7bf024090aa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.659625 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5aeb84a2-be84-4867-a141-e879208736c4-frr-startup\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.659648 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-frr-sockets\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.659665 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-frr-conf\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.659723 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-metrics\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.659741 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqn59\" (UniqueName: \"kubernetes.io/projected/5aeb84a2-be84-4867-a141-e879208736c4-kube-api-access-zqn59\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.659767 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cw2b\" (UniqueName: \"kubernetes.io/projected/c35b1371-974a-4a0f-b8a4-d7bf024090aa-kube-api-access-8cw2b\") pod \"frr-k8s-webhook-server-7df86c4f6c-gl477\" (UID: \"c35b1371-974a-4a0f-b8a4-d7bf024090aa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.659802 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-reloader\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.659842 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aeb84a2-be84-4867-a141-e879208736c4-metrics-certs\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.667873 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-bkg52"]
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.669048 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bkg52"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.671523 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.683237 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bkg52"]
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.760672 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aeb84a2-be84-4867-a141-e879208736c4-metrics-certs\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.760760 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2a55e4e3-80c5-4e46-8916-5a306903ce70-metallb-excludel2\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.760795 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkwjf\" (UniqueName: \"kubernetes.io/projected/bac401ac-4b21-403d-a9e0-808c69a6e0e6-kube-api-access-vkwjf\") pod \"controller-6968d8fdc4-bkg52\" (UID: \"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.760822 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c35b1371-974a-4a0f-b8a4-d7bf024090aa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gl477\" (UID: \"c35b1371-974a-4a0f-b8a4-d7bf024090aa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477"
Jan 26 14:23:55 crc kubenswrapper[4922]: E0126 14:23:55.760833 4922 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 26 14:23:55 crc kubenswrapper[4922]: E0126 14:23:55.760900 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aeb84a2-be84-4867-a141-e879208736c4-metrics-certs podName:5aeb84a2-be84-4867-a141-e879208736c4 nodeName:}" failed. No retries permitted until 2026-01-26 14:23:56.26087891 +0000 UTC m=+853.463141692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5aeb84a2-be84-4867-a141-e879208736c4-metrics-certs") pod "frr-k8s-kkr8x" (UID: "5aeb84a2-be84-4867-a141-e879208736c4") : secret "frr-k8s-certs-secret" not found
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.760845 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5aeb84a2-be84-4867-a141-e879208736c4-frr-startup\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761194 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-frr-sockets\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761214 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-memberlist\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761230 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-frr-conf\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761258 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6bhx\" (UniqueName: \"kubernetes.io/projected/2a55e4e3-80c5-4e46-8916-5a306903ce70-kube-api-access-j6bhx\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761274 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-metrics\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761292 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bac401ac-4b21-403d-a9e0-808c69a6e0e6-cert\") pod \"controller-6968d8fdc4-bkg52\" (UID: \"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761307 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqn59\" (UniqueName: \"kubernetes.io/projected/5aeb84a2-be84-4867-a141-e879208736c4-kube-api-access-zqn59\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761322 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bac401ac-4b21-403d-a9e0-808c69a6e0e6-metrics-certs\") pod \"controller-6968d8fdc4-bkg52\" (UID: \"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52"
\"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761349 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cw2b\" (UniqueName: \"kubernetes.io/projected/c35b1371-974a-4a0f-b8a4-d7bf024090aa-kube-api-access-8cw2b\") pod \"frr-k8s-webhook-server-7df86c4f6c-gl477\" (UID: \"c35b1371-974a-4a0f-b8a4-d7bf024090aa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761369 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-metrics-certs\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761386 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-reloader\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761675 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-reloader\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761684 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5aeb84a2-be84-4867-a141-e879208736c4-frr-startup\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.761853 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-frr-sockets\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.762046 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-frr-conf\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.762257 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5aeb84a2-be84-4867-a141-e879208736c4-metrics\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.771376 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c35b1371-974a-4a0f-b8a4-d7bf024090aa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-gl477\" (UID: \"c35b1371-974a-4a0f-b8a4-d7bf024090aa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.783500 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cw2b\" (UniqueName: 
\"kubernetes.io/projected/c35b1371-974a-4a0f-b8a4-d7bf024090aa-kube-api-access-8cw2b\") pod \"frr-k8s-webhook-server-7df86c4f6c-gl477\" (UID: \"c35b1371-974a-4a0f-b8a4-d7bf024090aa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.784531 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqn59\" (UniqueName: \"kubernetes.io/projected/5aeb84a2-be84-4867-a141-e879208736c4-kube-api-access-zqn59\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.863300 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6bhx\" (UniqueName: \"kubernetes.io/projected/2a55e4e3-80c5-4e46-8916-5a306903ce70-kube-api-access-j6bhx\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.863375 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bac401ac-4b21-403d-a9e0-808c69a6e0e6-cert\") pod \"controller-6968d8fdc4-bkg52\" (UID: \"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.863416 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bac401ac-4b21-403d-a9e0-808c69a6e0e6-metrics-certs\") pod \"controller-6968d8fdc4-bkg52\" (UID: \"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.863485 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-metrics-certs\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.863586 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2a55e4e3-80c5-4e46-8916-5a306903ce70-metallb-excludel2\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.863623 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkwjf\" (UniqueName: \"kubernetes.io/projected/bac401ac-4b21-403d-a9e0-808c69a6e0e6-kube-api-access-vkwjf\") pod \"controller-6968d8fdc4-bkg52\" (UID: \"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.863673 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-memberlist\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl" Jan 26 14:23:55 crc kubenswrapper[4922]: E0126 14:23:55.863842 4922 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 14:23:55 crc kubenswrapper[4922]: E0126 14:23:55.863920 4922 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-memberlist podName:2a55e4e3-80c5-4e46-8916-5a306903ce70 nodeName:}" failed. No retries permitted until 2026-01-26 14:23:56.363897534 +0000 UTC m=+853.566160326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-memberlist") pod "speaker-q7phl" (UID: "2a55e4e3-80c5-4e46-8916-5a306903ce70") : secret "metallb-memberlist" not found Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.864397 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2a55e4e3-80c5-4e46-8916-5a306903ce70-metallb-excludel2\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.865610 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.866989 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-metrics-certs\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.868545 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bac401ac-4b21-403d-a9e0-808c69a6e0e6-metrics-certs\") pod \"controller-6968d8fdc4-bkg52\" (UID: \"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.878051 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bac401ac-4b21-403d-a9e0-808c69a6e0e6-cert\") pod \"controller-6968d8fdc4-bkg52\" (UID: \"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.882236 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.889466 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkwjf\" (UniqueName: \"kubernetes.io/projected/bac401ac-4b21-403d-a9e0-808c69a6e0e6-kube-api-access-vkwjf\") pod \"controller-6968d8fdc4-bkg52\" (UID: \"bac401ac-4b21-403d-a9e0-808c69a6e0e6\") " pod="metallb-system/controller-6968d8fdc4-bkg52" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.890781 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6bhx\" (UniqueName: \"kubernetes.io/projected/2a55e4e3-80c5-4e46-8916-5a306903ce70-kube-api-access-j6bhx\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl" Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.981625 4922 util.go:30] "No sandbox for pod can be found. 
Jan 26 14:23:55 crc kubenswrapper[4922]: I0126 14:23:55.981625 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bkg52"
Jan 26 14:23:56 crc kubenswrapper[4922]: I0126 14:23:56.099885 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477"]
Jan 26 14:23:56 crc kubenswrapper[4922]: I0126 14:23:56.265199 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bkg52"]
Jan 26 14:23:56 crc kubenswrapper[4922]: I0126 14:23:56.269981 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aeb84a2-be84-4867-a141-e879208736c4-metrics-certs\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:56 crc kubenswrapper[4922]: I0126 14:23:56.276473 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aeb84a2-be84-4867-a141-e879208736c4-metrics-certs\") pod \"frr-k8s-kkr8x\" (UID: \"5aeb84a2-be84-4867-a141-e879208736c4\") " pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:56 crc kubenswrapper[4922]: I0126 14:23:56.371563 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-memberlist\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl"
Jan 26 14:23:56 crc kubenswrapper[4922]: E0126 14:23:56.371849 4922 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 14:23:56 crc kubenswrapper[4922]: E0126 14:23:56.372030 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-memberlist podName:2a55e4e3-80c5-4e46-8916-5a306903ce70 nodeName:}" failed. No retries permitted until 2026-01-26 14:23:57.37200281 +0000 UTC m=+854.574265592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-memberlist") pod "speaker-q7phl" (UID: "2a55e4e3-80c5-4e46-8916-5a306903ce70") : secret "metallb-memberlist" not found
Jan 26 14:23:56 crc kubenswrapper[4922]: I0126 14:23:56.419134 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bkg52" event={"ID":"bac401ac-4b21-403d-a9e0-808c69a6e0e6","Type":"ContainerStarted","Data":"0a67906e43bf5f28214218168c58739797fc1c42f764a857f9df5db682ed54dd"}
Jan 26 14:23:56 crc kubenswrapper[4922]: I0126 14:23:56.420447 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" event={"ID":"c35b1371-974a-4a0f-b8a4-d7bf024090aa","Type":"ContainerStarted","Data":"34215a33c6afd14329e9e49f7ff179710860b03275e3f9eaeb15a4809c702119"}
Jan 26 14:23:56 crc kubenswrapper[4922]: I0126 14:23:56.462753 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:23:57 crc kubenswrapper[4922]: I0126 14:23:57.386494 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-memberlist\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl"
Jan 26 14:23:57 crc kubenswrapper[4922]: I0126 14:23:57.394774 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a55e4e3-80c5-4e46-8916-5a306903ce70-memberlist\") pod \"speaker-q7phl\" (UID: \"2a55e4e3-80c5-4e46-8916-5a306903ce70\") " pod="metallb-system/speaker-q7phl"
Jan 26 14:23:57 crc kubenswrapper[4922]: I0126 14:23:57.430335 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerStarted","Data":"c02f6625899c60d5e9d7a786b8d81a8faaded8711f3d1e2cc47e4977f3b35de7"}
Jan 26 14:23:57 crc kubenswrapper[4922]: I0126 14:23:57.433415 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bkg52" event={"ID":"bac401ac-4b21-403d-a9e0-808c69a6e0e6","Type":"ContainerStarted","Data":"03f0926705a1bf324800b0e52e5aa7b40871f1d3fcf32c3347d85adb08e0b1ba"}
Jan 26 14:23:57 crc kubenswrapper[4922]: I0126 14:23:57.433460 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bkg52" event={"ID":"bac401ac-4b21-403d-a9e0-808c69a6e0e6","Type":"ContainerStarted","Data":"c9d29cc27a346ccb5947060b116b52a245834ba38fe3afc37b511b960e5afd20"}
Jan 26 14:23:57 crc kubenswrapper[4922]: I0126 14:23:57.434646 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-bkg52"
Jan 26 14:23:57 crc kubenswrapper[4922]: I0126 14:23:57.460534 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q7phl"
Jan 26 14:23:57 crc kubenswrapper[4922]: I0126 14:23:57.469474 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-bkg52" podStartSLOduration=2.469445665 podStartE2EDuration="2.469445665s" podCreationTimestamp="2026-01-26 14:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:23:57.463608607 +0000 UTC m=+854.665871429" watchObservedRunningTime="2026-01-26 14:23:57.469445665 +0000 UTC m=+854.671708467"
Jan 26 14:23:57 crc kubenswrapper[4922]: W0126 14:23:57.512005 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a55e4e3_80c5_4e46_8916_5a306903ce70.slice/crio-c3430077e9490f89ae4dd92704063e105ff233354c66c680eed15e0fe64fd943 WatchSource:0}: Error finding container c3430077e9490f89ae4dd92704063e105ff233354c66c680eed15e0fe64fd943: Status 404 returned error can't find the container with id c3430077e9490f89ae4dd92704063e105ff233354c66c680eed15e0fe64fd943
Jan 26 14:23:58 crc kubenswrapper[4922]: I0126 14:23:58.448288 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q7phl" event={"ID":"2a55e4e3-80c5-4e46-8916-5a306903ce70","Type":"ContainerStarted","Data":"2415e69dfcb3af64492b9903344c7ecb98fa392e4f3de5111ec6bceb239138ed"}
Jan 26 14:23:58 crc kubenswrapper[4922]: I0126 14:23:58.448510 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q7phl" event={"ID":"2a55e4e3-80c5-4e46-8916-5a306903ce70","Type":"ContainerStarted","Data":"6b34976d43fa984519d160bf04a1b702a4e52995975013aa75834793d54a2c2f"}
Jan 26 14:23:58 crc kubenswrapper[4922]: I0126 14:23:58.448521 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q7phl" event={"ID":"2a55e4e3-80c5-4e46-8916-5a306903ce70","Type":"ContainerStarted","Data":"c3430077e9490f89ae4dd92704063e105ff233354c66c680eed15e0fe64fd943"}
Jan 26 14:23:58 crc kubenswrapper[4922]: I0126 14:23:58.449027 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-q7phl"
Jan 26 14:24:03 crc kubenswrapper[4922]: I0126 14:24:03.117856 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-q7phl" podStartSLOduration=8.117830503 podStartE2EDuration="8.117830503s" podCreationTimestamp="2026-01-26 14:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:23:58.468659097 +0000 UTC m=+855.670921869" watchObservedRunningTime="2026-01-26 14:24:03.117830503 +0000 UTC m=+860.320093315"
Jan 26 14:24:04 crc kubenswrapper[4922]: I0126 14:24:04.494408 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" event={"ID":"c35b1371-974a-4a0f-b8a4-d7bf024090aa","Type":"ContainerStarted","Data":"eabebc5de63363f2f377afb679bebbb61b20469d7c596ff5d2c45a3df9c245d3"}
Jan 26 14:24:04 crc kubenswrapper[4922]: I0126 14:24:04.494790 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477"
Jan 26 14:24:04 crc kubenswrapper[4922]: I0126 14:24:04.495909 4922 generic.go:334] "Generic (PLEG): container finished" podID="5aeb84a2-be84-4867-a141-e879208736c4" containerID="8d9042eb09e13984d0efe8a0d677768c47ae78a9ee77de01f33a7150fed0368d" exitCode=0
Jan 26 14:24:04 crc kubenswrapper[4922]: I0126 14:24:04.495946 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerDied","Data":"8d9042eb09e13984d0efe8a0d677768c47ae78a9ee77de01f33a7150fed0368d"}
Jan 26 14:24:04 crc kubenswrapper[4922]: I0126 14:24:04.516698 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" podStartSLOduration=1.70565411 podStartE2EDuration="9.51668244s" podCreationTimestamp="2026-01-26 14:23:55 +0000 UTC" firstStartedPulling="2026-01-26 14:23:56.114274687 +0000 UTC m=+853.316537459" lastFinishedPulling="2026-01-26 14:24:03.925303017 +0000 UTC m=+861.127565789" observedRunningTime="2026-01-26 14:24:04.513543085 +0000 UTC m=+861.715805857" watchObservedRunningTime="2026-01-26 14:24:04.51668244 +0000 UTC m=+861.718945212"
Jan 26 14:24:05 crc kubenswrapper[4922]: I0126 14:24:05.508778 4922 generic.go:334] "Generic (PLEG): container finished" podID="5aeb84a2-be84-4867-a141-e879208736c4" containerID="c99d42f9c3d065ee018aa91f781a2fdc206e78ace0c192adc7b6376dbda34434" exitCode=0
Jan 26 14:24:05 crc kubenswrapper[4922]: I0126 14:24:05.509140 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerDied","Data":"c99d42f9c3d065ee018aa91f781a2fdc206e78ace0c192adc7b6376dbda34434"}
Jan 26 14:24:06 crc kubenswrapper[4922]: I0126 14:24:06.520450 4922 generic.go:334] "Generic (PLEG): container finished" podID="5aeb84a2-be84-4867-a141-e879208736c4" containerID="a854d0be6fb0e6635468f54e18c5c76f590c15a88228e2cdb7f476f80807d9b0" exitCode=0
Jan 26 14:24:06 crc kubenswrapper[4922]: I0126 14:24:06.520515 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerDied","Data":"a854d0be6fb0e6635468f54e18c5c76f590c15a88228e2cdb7f476f80807d9b0"}
Jan 26 14:24:07 crc kubenswrapper[4922]: I0126 14:24:07.466015 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-q7phl"
Jan 26 14:24:07 crc kubenswrapper[4922]: I0126 14:24:07.535771 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerStarted","Data":"f176e2165ebe8ba7feea0e6dcee71c06f9748987553371d364d0af46e301b6ea"}
Jan 26 14:24:07 crc kubenswrapper[4922]: I0126 14:24:07.535862 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerStarted","Data":"fbd50c67bcf40d7ee6cdc202bc387d4e2ea1c5290a620f0eaf8fc0fc27de8708"}
Jan 26 14:24:07 crc kubenswrapper[4922]: I0126 14:24:07.535881 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerStarted","Data":"d2162ac7734707c3acce299a711bc41856f74bdd02ad36501d4d0fc38f40ad3a"}
Jan 26 14:24:07 crc kubenswrapper[4922]: I0126 14:24:07.535898 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerStarted","Data":"ad532d1a334989f40db086863db583522badeb31430a0e200918471206055e32"}
Jan 26 14:24:08 crc kubenswrapper[4922]: I0126 14:24:08.552365 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerStarted","Data":"2bb2abb9f98fedc1c3be328de84d43e92e8c447c9acab5db3293324c50343519"}
Jan 26 14:24:08 crc kubenswrapper[4922]: I0126 14:24:08.552815 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-kkr8x"
Jan 26 14:24:08 crc kubenswrapper[4922]: I0126 14:24:08.552841 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kkr8x" event={"ID":"5aeb84a2-be84-4867-a141-e879208736c4","Type":"ContainerStarted","Data":"58a3fdaeb787ebef924627330737824fc86cc07940648495234dac55e3227611"}
Jan 26 14:24:08 crc kubenswrapper[4922]: I0126 14:24:08.592178 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-kkr8x" podStartSLOduration=6.205271749 podStartE2EDuration="13.592141055s" podCreationTimestamp="2026-01-26 14:23:55 +0000 UTC" firstStartedPulling="2026-01-26 14:23:56.585580073 +0000 UTC m=+853.787842845" lastFinishedPulling="2026-01-26 14:24:03.972449349 +0000 UTC m=+861.174712151" observedRunningTime="2026-01-26 14:24:08.583830229 +0000 UTC m=+865.786093041" watchObservedRunningTime="2026-01-26 14:24:08.592141055 +0000 UTC m=+865.794403907"
Jan 26 14:24:10 crc kubenswrapper[4922]: I0126 14:24:10.336798 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bphfc"]
Jan 26 14:24:10 crc kubenswrapper[4922]: I0126 14:24:10.339645 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bphfc"
Jan 26 14:24:10 crc kubenswrapper[4922]: I0126 14:24:10.346484 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bphfc"]
Jan 26 14:24:10 crc kubenswrapper[4922]: I0126 14:24:10.346713 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-k5xw5"
Jan 26 14:24:10 crc kubenswrapper[4922]: I0126 14:24:10.346939 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 26 14:24:10 crc kubenswrapper[4922]: I0126 14:24:10.347049 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 26 14:24:10 crc kubenswrapper[4922]: I0126 14:24:10.398927 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqz4\" (UniqueName: \"kubernetes.io/projected/9e0c7ec9-054d-48c6-a4f2-f34488516894-kube-api-access-mnqz4\") pod \"openstack-operator-index-bphfc\" (UID: \"9e0c7ec9-054d-48c6-a4f2-f34488516894\") " pod="openstack-operators/openstack-operator-index-bphfc"
Jan 26 14:24:10 crc kubenswrapper[4922]: I0126 14:24:10.500650 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqz4\" (UniqueName: \"kubernetes.io/projected/9e0c7ec9-054d-48c6-a4f2-f34488516894-kube-api-access-mnqz4\") pod \"openstack-operator-index-bphfc\" (UID: \"9e0c7ec9-054d-48c6-a4f2-f34488516894\") " pod="openstack-operators/openstack-operator-index-bphfc"
\"kubernetes.io/projected/9e0c7ec9-054d-48c6-a4f2-f34488516894-kube-api-access-mnqz4\") pod \"openstack-operator-index-bphfc\" (UID: \"9e0c7ec9-054d-48c6-a4f2-f34488516894\") " pod="openstack-operators/openstack-operator-index-bphfc" Jan 26 14:24:10 crc kubenswrapper[4922]: I0126 14:24:10.699648 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bphfc" Jan 26 14:24:11 crc kubenswrapper[4922]: I0126 14:24:11.166907 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bphfc"] Jan 26 14:24:11 crc kubenswrapper[4922]: W0126 14:24:11.173861 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e0c7ec9_054d_48c6_a4f2_f34488516894.slice/crio-99267a62c5e6326904a7da35a564f762d02ddeaf1ac0bb0a19045c4d27ed4c31 WatchSource:0}: Error finding container 99267a62c5e6326904a7da35a564f762d02ddeaf1ac0bb0a19045c4d27ed4c31: Status 404 returned error can't find the container with id 99267a62c5e6326904a7da35a564f762d02ddeaf1ac0bb0a19045c4d27ed4c31 Jan 26 14:24:11 crc kubenswrapper[4922]: I0126 14:24:11.464324 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:24:11 crc kubenswrapper[4922]: I0126 14:24:11.502163 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:24:11 crc kubenswrapper[4922]: I0126 14:24:11.575746 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bphfc" event={"ID":"9e0c7ec9-054d-48c6-a4f2-f34488516894","Type":"ContainerStarted","Data":"99267a62c5e6326904a7da35a564f762d02ddeaf1ac0bb0a19045c4d27ed4c31"} Jan 26 14:24:13 crc kubenswrapper[4922]: I0126 14:24:13.598461 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bphfc" event={"ID":"9e0c7ec9-054d-48c6-a4f2-f34488516894","Type":"ContainerStarted","Data":"42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94"} Jan 26 14:24:13 crc kubenswrapper[4922]: I0126 14:24:13.628840 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bphfc" podStartSLOduration=1.766764054 podStartE2EDuration="3.628811016s" podCreationTimestamp="2026-01-26 14:24:10 +0000 UTC" firstStartedPulling="2026-01-26 14:24:11.176961875 +0000 UTC m=+868.379224687" lastFinishedPulling="2026-01-26 14:24:13.039008867 +0000 UTC m=+870.241271649" observedRunningTime="2026-01-26 14:24:13.619913224 +0000 UTC m=+870.822176036" watchObservedRunningTime="2026-01-26 14:24:13.628811016 +0000 UTC m=+870.831073818" Jan 26 14:24:13 crc kubenswrapper[4922]: I0126 14:24:13.709390 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bphfc"] Jan 26 14:24:14 crc kubenswrapper[4922]: I0126 14:24:14.313431 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v57b5"] Jan 26 14:24:14 crc kubenswrapper[4922]: I0126 14:24:14.315604 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v57b5" Jan 26 14:24:14 crc kubenswrapper[4922]: I0126 14:24:14.322803 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v57b5"] Jan 26 14:24:14 crc kubenswrapper[4922]: I0126 14:24:14.367531 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8pk4\" (UniqueName: \"kubernetes.io/projected/64ecb550-f2ab-4e01-88b7-e8059bd434ff-kube-api-access-z8pk4\") pod \"openstack-operator-index-v57b5\" (UID: \"64ecb550-f2ab-4e01-88b7-e8059bd434ff\") " pod="openstack-operators/openstack-operator-index-v57b5" Jan 26 14:24:14 crc kubenswrapper[4922]: I0126 14:24:14.469850 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8pk4\" (UniqueName: \"kubernetes.io/projected/64ecb550-f2ab-4e01-88b7-e8059bd434ff-kube-api-access-z8pk4\") pod \"openstack-operator-index-v57b5\" (UID: \"64ecb550-f2ab-4e01-88b7-e8059bd434ff\") " pod="openstack-operators/openstack-operator-index-v57b5" Jan 26 14:24:14 crc kubenswrapper[4922]: I0126 14:24:14.496989 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8pk4\" (UniqueName: \"kubernetes.io/projected/64ecb550-f2ab-4e01-88b7-e8059bd434ff-kube-api-access-z8pk4\") pod \"openstack-operator-index-v57b5\" (UID: \"64ecb550-f2ab-4e01-88b7-e8059bd434ff\") " pod="openstack-operators/openstack-operator-index-v57b5" Jan 26 14:24:14 crc kubenswrapper[4922]: I0126 14:24:14.646264 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v57b5" Jan 26 14:24:15 crc kubenswrapper[4922]: I0126 14:24:15.069292 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v57b5"] Jan 26 14:24:15 crc kubenswrapper[4922]: W0126 14:24:15.074405 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64ecb550_f2ab_4e01_88b7_e8059bd434ff.slice/crio-721e309efaa2cb5a32645ac9ce424326a8cd793c15d0d6d8bde67ed9aff0483b WatchSource:0}: Error finding container 721e309efaa2cb5a32645ac9ce424326a8cd793c15d0d6d8bde67ed9aff0483b: Status 404 returned error can't find the container with id 721e309efaa2cb5a32645ac9ce424326a8cd793c15d0d6d8bde67ed9aff0483b Jan 26 14:24:15 crc kubenswrapper[4922]: I0126 14:24:15.617928 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v57b5" event={"ID":"64ecb550-f2ab-4e01-88b7-e8059bd434ff","Type":"ContainerStarted","Data":"d4001a0b2e0d0d5e736df3c7c1b8de4f2b6fe0a33dcf24cfba2f54ad236cbe61"} Jan 26 14:24:15 crc kubenswrapper[4922]: I0126 14:24:15.618297 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v57b5" event={"ID":"64ecb550-f2ab-4e01-88b7-e8059bd434ff","Type":"ContainerStarted","Data":"721e309efaa2cb5a32645ac9ce424326a8cd793c15d0d6d8bde67ed9aff0483b"} Jan 26 14:24:15 crc kubenswrapper[4922]: I0126 14:24:15.618055 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bphfc" podUID="9e0c7ec9-054d-48c6-a4f2-f34488516894" containerName="registry-server" containerID="cri-o://42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94" gracePeriod=2 Jan 26 14:24:15 crc kubenswrapper[4922]: I0126 14:24:15.651462 4922 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v57b5" podStartSLOduration=1.582966915 podStartE2EDuration="1.651424948s" podCreationTimestamp="2026-01-26 14:24:14 +0000 UTC" firstStartedPulling="2026-01-26 14:24:15.078563979 +0000 UTC m=+872.280826751" lastFinishedPulling="2026-01-26 14:24:15.147022012 +0000 UTC m=+872.349284784" observedRunningTime="2026-01-26 14:24:15.64009835 +0000 UTC m=+872.842361132" watchObservedRunningTime="2026-01-26 14:24:15.651424948 +0000 UTC m=+872.853687760" Jan 26 14:24:15 crc kubenswrapper[4922]: I0126 14:24:15.889328 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-gl477" Jan 26 14:24:15 crc kubenswrapper[4922]: I0126 14:24:15.988621 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-bkg52" Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.061633 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bphfc" Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.091847 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnqz4\" (UniqueName: \"kubernetes.io/projected/9e0c7ec9-054d-48c6-a4f2-f34488516894-kube-api-access-mnqz4\") pod \"9e0c7ec9-054d-48c6-a4f2-f34488516894\" (UID: \"9e0c7ec9-054d-48c6-a4f2-f34488516894\") " Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.098583 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0c7ec9-054d-48c6-a4f2-f34488516894-kube-api-access-mnqz4" (OuterVolumeSpecName: "kube-api-access-mnqz4") pod "9e0c7ec9-054d-48c6-a4f2-f34488516894" (UID: "9e0c7ec9-054d-48c6-a4f2-f34488516894"). InnerVolumeSpecName "kube-api-access-mnqz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.196393 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnqz4\" (UniqueName: \"kubernetes.io/projected/9e0c7ec9-054d-48c6-a4f2-f34488516894-kube-api-access-mnqz4\") on node \"crc\" DevicePath \"\"" Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.466269 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-kkr8x" Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.632507 4922 generic.go:334] "Generic (PLEG): container finished" podID="9e0c7ec9-054d-48c6-a4f2-f34488516894" containerID="42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94" exitCode=0 Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.632563 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bphfc" event={"ID":"9e0c7ec9-054d-48c6-a4f2-f34488516894","Type":"ContainerDied","Data":"42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94"} Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.632631 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bphfc" Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.632667 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bphfc" event={"ID":"9e0c7ec9-054d-48c6-a4f2-f34488516894","Type":"ContainerDied","Data":"99267a62c5e6326904a7da35a564f762d02ddeaf1ac0bb0a19045c4d27ed4c31"} Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.632703 4922 scope.go:117] "RemoveContainer" containerID="42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94" Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.655466 4922 scope.go:117] "RemoveContainer" containerID="42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94" Jan 26 14:24:16 crc kubenswrapper[4922]: E0126 14:24:16.656648 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94\": container with ID starting with 42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94 not found: ID does not exist" containerID="42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94" Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.656701 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94"} err="failed to get container status \"42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94\": rpc error: code = NotFound desc = could not find container \"42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94\": container with ID starting with 42e8d4f372cf6a0dd4db99ed04ffc0c2ab6c224d19e46b1cb8e6239bc9bf7f94 not found: ID does not exist" Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.673352 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bphfc"] Jan 26 14:24:16 crc kubenswrapper[4922]: I0126 14:24:16.685555 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bphfc"] Jan 26 14:24:17 crc kubenswrapper[4922]: I0126 14:24:17.107249 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0c7ec9-054d-48c6-a4f2-f34488516894" path="/var/lib/kubelet/pods/9e0c7ec9-054d-48c6-a4f2-f34488516894/volumes" Jan 26 14:24:24 crc kubenswrapper[4922]: I0126 14:24:24.647313 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-v57b5" Jan 26 14:24:24 crc kubenswrapper[4922]: I0126 14:24:24.647999 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-v57b5" Jan 26 14:24:24 crc kubenswrapper[4922]: I0126 14:24:24.673872 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-v57b5" Jan 26 14:24:24 crc kubenswrapper[4922]: I0126 14:24:24.726250 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-v57b5" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.561470 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j"] Jan 26 14:24:27 crc kubenswrapper[4922]: E0126 14:24:27.562009 4922 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9e0c7ec9-054d-48c6-a4f2-f34488516894" containerName="registry-server" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.562024 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0c7ec9-054d-48c6-a4f2-f34488516894" containerName="registry-server" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.562180 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0c7ec9-054d-48c6-a4f2-f34488516894" containerName="registry-server" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.563217 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.565551 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-k825z" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.611344 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j"] Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.653130 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwwvj\" (UniqueName: \"kubernetes.io/projected/d84c8528-9f70-476f-a622-90992fd49e69-kube-api-access-zwwvj\") pod \"16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.653224 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-util\") pod \"16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.653265 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-bundle\") pod \"16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.754295 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-util\") pod \"16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.754388 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-bundle\") pod \"16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.754469 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zwwvj\" (UniqueName: \"kubernetes.io/projected/d84c8528-9f70-476f-a622-90992fd49e69-kube-api-access-zwwvj\") pod \"16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.755137 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-bundle\") pod \"16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.755139 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-util\") pod \"16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.787585 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwwvj\" (UniqueName: \"kubernetes.io/projected/d84c8528-9f70-476f-a622-90992fd49e69-kube-api-access-zwwvj\") pod \"16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:27 crc kubenswrapper[4922]: I0126 14:24:27.879868 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.323563 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gw7wc"] Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.325936 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.343044 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw7wc"] Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.365542 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-catalog-content\") pod \"redhat-marketplace-gw7wc\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.365603 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl2rw\" (UniqueName: \"kubernetes.io/projected/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-kube-api-access-gl2rw\") pod \"redhat-marketplace-gw7wc\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.365632 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-utilities\") pod \"redhat-marketplace-gw7wc\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.389800 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j"] Jan 26 14:24:28 crc kubenswrapper[4922]: W0126 14:24:28.403257 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd84c8528_9f70_476f_a622_90992fd49e69.slice/crio-fa3cd9fe42a95adbd532db8b9268937d96c8b534ee3b16b71a3bf98b98d96c2a WatchSource:0}: Error finding container fa3cd9fe42a95adbd532db8b9268937d96c8b534ee3b16b71a3bf98b98d96c2a: Status 404 returned error can't find the container with id fa3cd9fe42a95adbd532db8b9268937d96c8b534ee3b16b71a3bf98b98d96c2a Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.467444 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-catalog-content\") pod \"redhat-marketplace-gw7wc\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.467530 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl2rw\" (UniqueName: \"kubernetes.io/projected/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-kube-api-access-gl2rw\") pod \"redhat-marketplace-gw7wc\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.467559 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-utilities\") pod \"redhat-marketplace-gw7wc\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.468191 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-catalog-content\") pod \"redhat-marketplace-gw7wc\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.468233 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-utilities\") pod \"redhat-marketplace-gw7wc\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.487521 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl2rw\" (UniqueName: \"kubernetes.io/projected/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-kube-api-access-gl2rw\") pod \"redhat-marketplace-gw7wc\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.654279 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.729244 4922 generic.go:334] "Generic (PLEG): container finished" podID="d84c8528-9f70-476f-a622-90992fd49e69" containerID="8ca7625a5ef8fd70027505e6d4dbdb73e0b6d4d1a17c7e4d776a418121e80898" exitCode=0 Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.729843 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" event={"ID":"d84c8528-9f70-476f-a622-90992fd49e69","Type":"ContainerDied","Data":"8ca7625a5ef8fd70027505e6d4dbdb73e0b6d4d1a17c7e4d776a418121e80898"} Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.729912 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" event={"ID":"d84c8528-9f70-476f-a622-90992fd49e69","Type":"ContainerStarted","Data":"fa3cd9fe42a95adbd532db8b9268937d96c8b534ee3b16b71a3bf98b98d96c2a"} Jan 26 14:24:28 crc kubenswrapper[4922]: I0126 14:24:28.876426 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw7wc"] Jan 26 14:24:28 crc kubenswrapper[4922]: W0126 14:24:28.946505 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c4343c0_eb5d_49cc_b749_8dd3fc44a963.slice/crio-9843130f20e788bf5e448e74997476a7365be19e2636ddff7fc2602c90f6a03e WatchSource:0}: Error finding container 9843130f20e788bf5e448e74997476a7365be19e2636ddff7fc2602c90f6a03e: Status 404 returned error can't find the container with id 9843130f20e788bf5e448e74997476a7365be19e2636ddff7fc2602c90f6a03e Jan 26 14:24:29 crc kubenswrapper[4922]: I0126 14:24:29.740779 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" event={"ID":"d84c8528-9f70-476f-a622-90992fd49e69","Type":"ContainerDied","Data":"386d03b19301d7781707156af13e4e03b5ec2182fda0c06fcef4006a2216fa39"} Jan 26 14:24:29 crc kubenswrapper[4922]: I0126 14:24:29.740652 4922 generic.go:334] "Generic (PLEG): container finished" podID="d84c8528-9f70-476f-a622-90992fd49e69" containerID="386d03b19301d7781707156af13e4e03b5ec2182fda0c06fcef4006a2216fa39" exitCode=0 Jan 26 14:24:29 crc 
kubenswrapper[4922]: I0126 14:24:29.751410 4922 generic.go:334] "Generic (PLEG): container finished" podID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerID="a33dbf4070fc27bcd7394e7572dbec94091e01f58425bbb4d43cfa653ac1ac8e" exitCode=0 Jan 26 14:24:29 crc kubenswrapper[4922]: I0126 14:24:29.751542 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw7wc" event={"ID":"6c4343c0-eb5d-49cc-b749-8dd3fc44a963","Type":"ContainerDied","Data":"a33dbf4070fc27bcd7394e7572dbec94091e01f58425bbb4d43cfa653ac1ac8e"} Jan 26 14:24:29 crc kubenswrapper[4922]: I0126 14:24:29.751941 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw7wc" event={"ID":"6c4343c0-eb5d-49cc-b749-8dd3fc44a963","Type":"ContainerStarted","Data":"9843130f20e788bf5e448e74997476a7365be19e2636ddff7fc2602c90f6a03e"} Jan 26 14:24:31 crc kubenswrapper[4922]: I0126 14:24:31.775992 4922 generic.go:334] "Generic (PLEG): container finished" podID="d84c8528-9f70-476f-a622-90992fd49e69" containerID="ebc7c98374030b52abb451015280a1812f334faafaa9498625b6e79d69a937d8" exitCode=0 Jan 26 14:24:31 crc kubenswrapper[4922]: I0126 14:24:31.776038 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" event={"ID":"d84c8528-9f70-476f-a622-90992fd49e69","Type":"ContainerDied","Data":"ebc7c98374030b52abb451015280a1812f334faafaa9498625b6e79d69a937d8"} Jan 26 14:24:31 crc kubenswrapper[4922]: I0126 14:24:31.780476 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw7wc" event={"ID":"6c4343c0-eb5d-49cc-b749-8dd3fc44a963","Type":"ContainerStarted","Data":"25c6e2a5f6f57a7b2a0c224cb20ea498d630b45a951969e65468555b49da2ed0"} Jan 26 14:24:32 crc kubenswrapper[4922]: I0126 14:24:32.789920 4922 generic.go:334] "Generic (PLEG): container finished" podID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerID="25c6e2a5f6f57a7b2a0c224cb20ea498d630b45a951969e65468555b49da2ed0" exitCode=0 Jan 26 14:24:32 crc kubenswrapper[4922]: I0126 14:24:32.790447 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw7wc" event={"ID":"6c4343c0-eb5d-49cc-b749-8dd3fc44a963","Type":"ContainerDied","Data":"25c6e2a5f6f57a7b2a0c224cb20ea498d630b45a951969e65468555b49da2ed0"} Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.079438 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.246288 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-util\") pod \"d84c8528-9f70-476f-a622-90992fd49e69\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.246602 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-bundle\") pod \"d84c8528-9f70-476f-a622-90992fd49e69\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.246635 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwwvj\" (UniqueName: \"kubernetes.io/projected/d84c8528-9f70-476f-a622-90992fd49e69-kube-api-access-zwwvj\") pod \"d84c8528-9f70-476f-a622-90992fd49e69\" (UID: \"d84c8528-9f70-476f-a622-90992fd49e69\") " Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.247254 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-bundle" (OuterVolumeSpecName: "bundle") pod "d84c8528-9f70-476f-a622-90992fd49e69" (UID: "d84c8528-9f70-476f-a622-90992fd49e69"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.256253 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84c8528-9f70-476f-a622-90992fd49e69-kube-api-access-zwwvj" (OuterVolumeSpecName: "kube-api-access-zwwvj") pod "d84c8528-9f70-476f-a622-90992fd49e69" (UID: "d84c8528-9f70-476f-a622-90992fd49e69"). InnerVolumeSpecName "kube-api-access-zwwvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.278695 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-util" (OuterVolumeSpecName: "util") pod "d84c8528-9f70-476f-a622-90992fd49e69" (UID: "d84c8528-9f70-476f-a622-90992fd49e69"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.348241 4922 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-util\") on node \"crc\" DevicePath \"\"" Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.348305 4922 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d84c8528-9f70-476f-a622-90992fd49e69-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.348320 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwwvj\" (UniqueName: \"kubernetes.io/projected/d84c8528-9f70-476f-a622-90992fd49e69-kube-api-access-zwwvj\") on node \"crc\" DevicePath \"\"" Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.797990 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" event={"ID":"d84c8528-9f70-476f-a622-90992fd49e69","Type":"ContainerDied","Data":"fa3cd9fe42a95adbd532db8b9268937d96c8b534ee3b16b71a3bf98b98d96c2a"} Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.798030 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa3cd9fe42a95adbd532db8b9268937d96c8b534ee3b16b71a3bf98b98d96c2a" Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.798008 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j" Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.799471 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw7wc" event={"ID":"6c4343c0-eb5d-49cc-b749-8dd3fc44a963","Type":"ContainerStarted","Data":"42ff62e34200fa88f53be8eddc06e83b1cc217926cac843efa90cc818a68d507"} Jan 26 14:24:33 crc kubenswrapper[4922]: I0126 14:24:33.819509 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gw7wc" podStartSLOduration=2.399125745 podStartE2EDuration="5.819493413s" podCreationTimestamp="2026-01-26 14:24:28 +0000 UTC" firstStartedPulling="2026-01-26 14:24:29.75952409 +0000 UTC m=+886.961786902" lastFinishedPulling="2026-01-26 14:24:33.179891788 +0000 UTC m=+890.382154570" observedRunningTime="2026-01-26 14:24:33.816988865 +0000 UTC m=+891.019251637" watchObservedRunningTime="2026-01-26 14:24:33.819493413 +0000 UTC m=+891.021756185" Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.181508 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg"] Jan 26 14:24:38 crc kubenswrapper[4922]: E0126 14:24:38.182520 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84c8528-9f70-476f-a622-90992fd49e69" containerName="util" Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.182537 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84c8528-9f70-476f-a622-90992fd49e69" containerName="util" Jan 26 14:24:38 crc kubenswrapper[4922]: E0126 14:24:38.182556 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84c8528-9f70-476f-a622-90992fd49e69" containerName="extract" Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.182564 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84c8528-9f70-476f-a622-90992fd49e69" containerName="extract" Jan 
Jan 26 14:24:38 crc kubenswrapper[4922]: E0126 14:24:38.182577 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84c8528-9f70-476f-a622-90992fd49e69" containerName="pull"
Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.182585 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84c8528-9f70-476f-a622-90992fd49e69" containerName="pull"
Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.182731 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84c8528-9f70-476f-a622-90992fd49e69" containerName="extract"
Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.183285 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg"
Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.186536 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-ws7g4"
Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.211813 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg"]
Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.233042 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf46c\" (UniqueName: \"kubernetes.io/projected/5857d460-cbd7-4dba-b280-e791678bc021-kube-api-access-rf46c\") pod \"openstack-operator-controller-init-c7fd5fdf7-4dsfg\" (UID: \"5857d460-cbd7-4dba-b280-e791678bc021\") " pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg"
Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.334246 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf46c\" (UniqueName: \"kubernetes.io/projected/5857d460-cbd7-4dba-b280-e791678bc021-kube-api-access-rf46c\") pod \"openstack-operator-controller-init-c7fd5fdf7-4dsfg\" (UID: \"5857d460-cbd7-4dba-b280-e791678bc021\") " pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg"
Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.368210 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf46c\" (UniqueName: \"kubernetes.io/projected/5857d460-cbd7-4dba-b280-e791678bc021-kube-api-access-rf46c\") pod \"openstack-operator-controller-init-c7fd5fdf7-4dsfg\" (UID: \"5857d460-cbd7-4dba-b280-e791678bc021\") " pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg"
Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.506218 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg"
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg" Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.656430 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.657621 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.737208 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.849267 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg"] Jan 26 14:24:38 crc kubenswrapper[4922]: W0126 14:24:38.872279 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5857d460_cbd7_4dba_b280_e791678bc021.slice/crio-8dccd587fdc737c8ae4609e9f82101f8482854a8eb0e46cf29b94f470db31a2d WatchSource:0}: Error finding container 8dccd587fdc737c8ae4609e9f82101f8482854a8eb0e46cf29b94f470db31a2d: Status 404 returned error can't find the container with id 8dccd587fdc737c8ae4609e9f82101f8482854a8eb0e46cf29b94f470db31a2d Jan 26 14:24:38 crc kubenswrapper[4922]: I0126 14:24:38.893172 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:39 crc kubenswrapper[4922]: I0126 14:24:39.837586 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg" event={"ID":"5857d460-cbd7-4dba-b280-e791678bc021","Type":"ContainerStarted","Data":"8dccd587fdc737c8ae4609e9f82101f8482854a8eb0e46cf29b94f470db31a2d"} Jan 26 14:24:40 crc kubenswrapper[4922]: I0126 14:24:40.713985 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw7wc"] Jan 26 14:24:41 crc kubenswrapper[4922]: I0126 14:24:41.306503 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:24:41 crc kubenswrapper[4922]: I0126 14:24:41.306553 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:24:41 crc kubenswrapper[4922]: I0126 14:24:41.867197 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gw7wc" podUID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerName="registry-server" containerID="cri-o://42ff62e34200fa88f53be8eddc06e83b1cc217926cac843efa90cc818a68d507" gracePeriod=2 Jan 26 14:24:42 crc kubenswrapper[4922]: I0126 14:24:42.877397 4922 generic.go:334] "Generic (PLEG): container finished" podID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerID="42ff62e34200fa88f53be8eddc06e83b1cc217926cac843efa90cc818a68d507" exitCode=0 Jan 26 14:24:42 crc kubenswrapper[4922]: I0126 14:24:42.877454 4922 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw7wc" event={"ID":"6c4343c0-eb5d-49cc-b749-8dd3fc44a963","Type":"ContainerDied","Data":"42ff62e34200fa88f53be8eddc06e83b1cc217926cac843efa90cc818a68d507"} Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.124633 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.307924 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-utilities\") pod \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.308055 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-catalog-content\") pod \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.308140 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl2rw\" (UniqueName: \"kubernetes.io/projected/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-kube-api-access-gl2rw\") pod \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\" (UID: \"6c4343c0-eb5d-49cc-b749-8dd3fc44a963\") " Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.308878 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-utilities" (OuterVolumeSpecName: "utilities") pod "6c4343c0-eb5d-49cc-b749-8dd3fc44a963" (UID: "6c4343c0-eb5d-49cc-b749-8dd3fc44a963"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.313927 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-kube-api-access-gl2rw" (OuterVolumeSpecName: "kube-api-access-gl2rw") pod "6c4343c0-eb5d-49cc-b749-8dd3fc44a963" (UID: "6c4343c0-eb5d-49cc-b749-8dd3fc44a963"). InnerVolumeSpecName "kube-api-access-gl2rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.334816 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c4343c0-eb5d-49cc-b749-8dd3fc44a963" (UID: "6c4343c0-eb5d-49cc-b749-8dd3fc44a963"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.409368 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.409397 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl2rw\" (UniqueName: \"kubernetes.io/projected/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-kube-api-access-gl2rw\") on node \"crc\" DevicePath \"\"" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.409408 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4343c0-eb5d-49cc-b749-8dd3fc44a963-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.887003 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg" event={"ID":"5857d460-cbd7-4dba-b280-e791678bc021","Type":"ContainerStarted","Data":"a2fc2180bbdd1893fc2f18ff1267f52fef5cedaaf5df8035d952ddd6c20a2f08"} Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.887985 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.891021 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw7wc" event={"ID":"6c4343c0-eb5d-49cc-b749-8dd3fc44a963","Type":"ContainerDied","Data":"9843130f20e788bf5e448e74997476a7365be19e2636ddff7fc2602c90f6a03e"} Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.891131 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gw7wc" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.891151 4922 scope.go:117] "RemoveContainer" containerID="42ff62e34200fa88f53be8eddc06e83b1cc217926cac843efa90cc818a68d507" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.928934 4922 scope.go:117] "RemoveContainer" containerID="25c6e2a5f6f57a7b2a0c224cb20ea498d630b45a951969e65468555b49da2ed0" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.957552 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg" podStartSLOduration=1.844572503 podStartE2EDuration="5.95718213s" podCreationTimestamp="2026-01-26 14:24:38 +0000 UTC" firstStartedPulling="2026-01-26 14:24:38.875478381 +0000 UTC m=+896.077741153" lastFinishedPulling="2026-01-26 14:24:42.988087988 +0000 UTC m=+900.190350780" observedRunningTime="2026-01-26 14:24:43.944185006 +0000 UTC m=+901.146447828" watchObservedRunningTime="2026-01-26 14:24:43.95718213 +0000 UTC m=+901.159444932" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.984297 4922 scope.go:117] "RemoveContainer" containerID="a33dbf4070fc27bcd7394e7572dbec94091e01f58425bbb4d43cfa653ac1ac8e" Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.991537 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw7wc"] Jan 26 14:24:43 crc kubenswrapper[4922]: I0126 14:24:43.998631 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw7wc"] Jan 26 14:24:45 crc kubenswrapper[4922]: I0126 14:24:45.106139 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" path="/var/lib/kubelet/pods/6c4343c0-eb5d-49cc-b749-8dd3fc44a963/volumes" Jan 26 14:24:48 crc kubenswrapper[4922]: I0126 14:24:48.508794 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-c7fd5fdf7-4dsfg" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.128628 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sw6ch"] Jan 26 14:24:54 crc kubenswrapper[4922]: E0126 14:24:54.130437 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerName="extract-utilities" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.130525 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerName="extract-utilities" Jan 26 14:24:54 crc kubenswrapper[4922]: E0126 14:24:54.130608 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerName="registry-server" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.130672 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerName="registry-server" Jan 26 14:24:54 crc kubenswrapper[4922]: E0126 14:24:54.130752 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerName="extract-content" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.130829 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerName="extract-content" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.131039 4922 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6c4343c0-eb5d-49cc-b749-8dd3fc44a963" containerName="registry-server" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.132159 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.142427 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sw6ch"] Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.265672 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-catalog-content\") pod \"community-operators-sw6ch\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.266127 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpfq7\" (UniqueName: \"kubernetes.io/projected/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-kube-api-access-qpfq7\") pod \"community-operators-sw6ch\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.266296 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-utilities\") pod \"community-operators-sw6ch\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.368007 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-utilities\") pod \"community-operators-sw6ch\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.368054 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-catalog-content\") pod \"community-operators-sw6ch\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.368092 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpfq7\" (UniqueName: \"kubernetes.io/projected/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-kube-api-access-qpfq7\") pod \"community-operators-sw6ch\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.368504 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-catalog-content\") pod \"community-operators-sw6ch\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.368517 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-utilities\") pod 
\"community-operators-sw6ch\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.393922 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpfq7\" (UniqueName: \"kubernetes.io/projected/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-kube-api-access-qpfq7\") pod \"community-operators-sw6ch\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.447505 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.910979 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sw6ch"] Jan 26 14:24:54 crc kubenswrapper[4922]: I0126 14:24:54.966451 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw6ch" event={"ID":"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6","Type":"ContainerStarted","Data":"0efdde29e1ea93633fe8c5b44d58a60973208970f21c243fd870e46aa38e2dfc"} Jan 26 14:24:55 crc kubenswrapper[4922]: I0126 14:24:55.973445 4922 generic.go:334] "Generic (PLEG): container finished" podID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerID="521ae4ee95f9ac54b6bed9f8639eb7ad123a2023a525522a00648932f280c5c2" exitCode=0 Jan 26 14:24:55 crc kubenswrapper[4922]: I0126 14:24:55.973501 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw6ch" event={"ID":"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6","Type":"ContainerDied","Data":"521ae4ee95f9ac54b6bed9f8639eb7ad123a2023a525522a00648932f280c5c2"} Jan 26 14:24:57 crc kubenswrapper[4922]: I0126 14:24:57.991625 4922 generic.go:334] "Generic (PLEG): container finished" podID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerID="41544b6f122ee50edac81e2bf536155fb94c694573d903d6bb274004a7675d66" exitCode=0 Jan 26 14:24:57 crc kubenswrapper[4922]: I0126 14:24:57.991738 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw6ch" event={"ID":"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6","Type":"ContainerDied","Data":"41544b6f122ee50edac81e2bf536155fb94c694573d903d6bb274004a7675d66"} Jan 26 14:25:00 crc kubenswrapper[4922]: I0126 14:25:00.006172 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw6ch" event={"ID":"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6","Type":"ContainerStarted","Data":"dc94e5ce3bec721ba9b2998daa94a339067d46718b56b3db0a665f6161662cf7"} Jan 26 14:25:00 crc kubenswrapper[4922]: I0126 14:25:00.025125 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sw6ch" podStartSLOduration=2.99642562 podStartE2EDuration="6.025107602s" podCreationTimestamp="2026-01-26 14:24:54 +0000 UTC" firstStartedPulling="2026-01-26 14:24:55.975841846 +0000 UTC m=+913.178104608" lastFinishedPulling="2026-01-26 14:24:59.004523818 +0000 UTC m=+916.206786590" observedRunningTime="2026-01-26 14:25:00.024396322 +0000 UTC m=+917.226659114" watchObservedRunningTime="2026-01-26 14:25:00.025107602 +0000 UTC m=+917.227370374" Jan 26 14:25:04 crc kubenswrapper[4922]: I0126 14:25:04.447967 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:25:04 crc 
kubenswrapper[4922]: I0126 14:25:04.448407 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:25:04 crc kubenswrapper[4922]: I0126 14:25:04.503960 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:25:05 crc kubenswrapper[4922]: I0126 14:25:05.100140 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:25:05 crc kubenswrapper[4922]: I0126 14:25:05.142556 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sw6ch"] Jan 26 14:25:07 crc kubenswrapper[4922]: I0126 14:25:07.047144 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sw6ch" podUID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerName="registry-server" containerID="cri-o://dc94e5ce3bec721ba9b2998daa94a339067d46718b56b3db0a665f6161662cf7" gracePeriod=2 Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.059161 4922 generic.go:334] "Generic (PLEG): container finished" podID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerID="dc94e5ce3bec721ba9b2998daa94a339067d46718b56b3db0a665f6161662cf7" exitCode=0 Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.059274 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw6ch" event={"ID":"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6","Type":"ContainerDied","Data":"dc94e5ce3bec721ba9b2998daa94a339067d46718b56b3db0a665f6161662cf7"} Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.217827 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.218803 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.220918 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kxsgb" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.229911 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.251988 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.252883 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.254785 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wpw69" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.265624 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.266733 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.269381 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-472s2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.273330 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrwjn\" (UniqueName: \"kubernetes.io/projected/2dc5ea59-1467-4fec-b933-e144ea4fda4a-kube-api-access-wrwjn\") pod \"barbican-operator-controller-manager-7f86f8796f-d4kf8\" (UID: \"2dc5ea59-1467-4fec-b933-e144ea4fda4a\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.273399 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9fm\" (UniqueName: \"kubernetes.io/projected/98d7d86a-4bc1-4165-9dc5-3260b879df04-kube-api-access-8h9fm\") pod \"designate-operator-controller-manager-b45d7bf98-sthwh\" (UID: \"98d7d86a-4bc1-4165-9dc5-3260b879df04\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.273429 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tk5q\" (UniqueName: \"kubernetes.io/projected/edd25ba7-355c-48aa-a7f5-0a60df9f1307-kube-api-access-4tk5q\") pod \"cinder-operator-controller-manager-7478f7dbf9-6pjpc\" (UID: \"edd25ba7-355c-48aa-a7f5-0a60df9f1307\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.276051 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.276954 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.279617 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4ncmj" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.287694 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.291855 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.298179 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.299158 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.302983 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-sj47r" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.306227 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.306963 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.311236 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dqsmk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.330023 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.340461 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.363331 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.374145 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrwjn\" (UniqueName: \"kubernetes.io/projected/2dc5ea59-1467-4fec-b933-e144ea4fda4a-kube-api-access-wrwjn\") pod \"barbican-operator-controller-manager-7f86f8796f-d4kf8\" (UID: \"2dc5ea59-1467-4fec-b933-e144ea4fda4a\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.374195 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zsn\" (UniqueName: \"kubernetes.io/projected/2203be8d-8aa1-4617-8297-c715783969a6-kube-api-access-z5zsn\") pod \"heat-operator-controller-manager-594c8c9d5d-fr5t7\" (UID: \"2203be8d-8aa1-4617-8297-c715783969a6\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.374225 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9fm\" (UniqueName: \"kubernetes.io/projected/98d7d86a-4bc1-4165-9dc5-3260b879df04-kube-api-access-8h9fm\") pod \"designate-operator-controller-manager-b45d7bf98-sthwh\" (UID: \"98d7d86a-4bc1-4165-9dc5-3260b879df04\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.374255 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tk5q\" (UniqueName: \"kubernetes.io/projected/edd25ba7-355c-48aa-a7f5-0a60df9f1307-kube-api-access-4tk5q\") pod \"cinder-operator-controller-manager-7478f7dbf9-6pjpc\" (UID: \"edd25ba7-355c-48aa-a7f5-0a60df9f1307\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.374281 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f7xmm\" (UniqueName: \"kubernetes.io/projected/eb93770e-722e-474d-93ef-5767d506fbf5-kube-api-access-f7xmm\") pod \"horizon-operator-controller-manager-77d5c5b54f-grh8g\" (UID: \"eb93770e-722e-474d-93ef-5767d506fbf5\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.374305 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q79xz\" (UniqueName: \"kubernetes.io/projected/3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e-kube-api-access-q79xz\") pod \"glance-operator-controller-manager-78fdd796fd-9xxqk\" (UID: \"3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.383295 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.384107 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.389804 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.389960 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-w586m" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.405488 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.406283 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.406508 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tk5q\" (UniqueName: \"kubernetes.io/projected/edd25ba7-355c-48aa-a7f5-0a60df9f1307-kube-api-access-4tk5q\") pod \"cinder-operator-controller-manager-7478f7dbf9-6pjpc\" (UID: \"edd25ba7-355c-48aa-a7f5-0a60df9f1307\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.410132 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.415081 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-27cth" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.420705 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.425667 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9fm\" (UniqueName: \"kubernetes.io/projected/98d7d86a-4bc1-4165-9dc5-3260b879df04-kube-api-access-8h9fm\") pod \"designate-operator-controller-manager-b45d7bf98-sthwh\" (UID: \"98d7d86a-4bc1-4165-9dc5-3260b879df04\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.427789 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.428643 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.428667 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrwjn\" (UniqueName: \"kubernetes.io/projected/2dc5ea59-1467-4fec-b933-e144ea4fda4a-kube-api-access-wrwjn\") pod \"barbican-operator-controller-manager-7f86f8796f-d4kf8\" (UID: \"2dc5ea59-1467-4fec-b933-e144ea4fda4a\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.434117 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.435226 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.434383 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-pcchh" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.449729 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zdlx5" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.458480 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.464169 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.465025 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.509316 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-bgwld" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.513873 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7xmm\" (UniqueName: \"kubernetes.io/projected/eb93770e-722e-474d-93ef-5767d506fbf5-kube-api-access-f7xmm\") pod \"horizon-operator-controller-manager-77d5c5b54f-grh8g\" (UID: \"eb93770e-722e-474d-93ef-5767d506fbf5\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.513916 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q79xz\" (UniqueName: \"kubernetes.io/projected/3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e-kube-api-access-q79xz\") pod \"glance-operator-controller-manager-78fdd796fd-9xxqk\" (UID: \"3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.514169 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5zsn\" (UniqueName: \"kubernetes.io/projected/2203be8d-8aa1-4617-8297-c715783969a6-kube-api-access-z5zsn\") pod \"heat-operator-controller-manager-594c8c9d5d-fr5t7\" (UID: \"2203be8d-8aa1-4617-8297-c715783969a6\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.544406 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.548355 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5zsn\" (UniqueName: \"kubernetes.io/projected/2203be8d-8aa1-4617-8297-c715783969a6-kube-api-access-z5zsn\") pod \"heat-operator-controller-manager-594c8c9d5d-fr5t7\" (UID: \"2203be8d-8aa1-4617-8297-c715783969a6\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.573670 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q79xz\" (UniqueName: \"kubernetes.io/projected/3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e-kube-api-access-q79xz\") pod \"glance-operator-controller-manager-78fdd796fd-9xxqk\" (UID: \"3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.577652 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.582788 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7xmm\" (UniqueName: \"kubernetes.io/projected/eb93770e-722e-474d-93ef-5767d506fbf5-kube-api-access-f7xmm\") pod \"horizon-operator-controller-manager-77d5c5b54f-grh8g\" (UID: \"eb93770e-722e-474d-93ef-5767d506fbf5\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.594362 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.597885 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.604402 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.611676 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.617271 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.618111 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.618629 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.619000 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.620030 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l88v\" (UniqueName: \"kubernetes.io/projected/ae2af37b-8945-48b3-8ed3-c2412b39c897-kube-api-access-2l88v\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.620149 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwhm\" (UniqueName: \"kubernetes.io/projected/3e944bff-02ee-4d1d-948b-350795772f18-kube-api-access-7gwhm\") pod \"ironic-operator-controller-manager-598f7747c9-vbmsc\" (UID: \"3e944bff-02ee-4d1d-948b-350795772f18\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.620171 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvq29\" (UniqueName: \"kubernetes.io/projected/03233631-2567-42a5-af70-861afeefbba3-kube-api-access-dvq29\") pod \"keystone-operator-controller-manager-b8b6d4659-s2hjs\" (UID: \"03233631-2567-42a5-af70-861afeefbba3\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.620186 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.620210 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbmt\" (UniqueName: \"kubernetes.io/projected/aa376169-3b34-4289-b339-14fc6f14a0e9-kube-api-access-pvbmt\") pod \"manila-operator-controller-manager-78c6999f6f-lvrq9\" (UID: \"aa376169-3b34-4289-b339-14fc6f14a0e9\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.620238 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqdn\" (UniqueName: \"kubernetes.io/projected/fa8912b6-c04f-4a1e-bb7a-8cae762f00ab-kube-api-access-5fqdn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk\" (UID: \"fa8912b6-c04f-4a1e-bb7a-8cae762f00ab\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.628365 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7dkv7" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.628620 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8mbg4" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.628724 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6"] Jan 26 14:25:08 
crc kubenswrapper[4922]: I0126 14:25:08.629514 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.640216 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.642181 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.653155 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.653567 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.653588 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.658477 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4m7t5" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.702409 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.704715 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.711714 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.712370 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.712885 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.713538 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-g4vx2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.717487 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kz95l" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726046 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l88v\" (UniqueName: \"kubernetes.io/projected/ae2af37b-8945-48b3-8ed3-c2412b39c897-kube-api-access-2l88v\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726105 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwhm\" (UniqueName: \"kubernetes.io/projected/3e944bff-02ee-4d1d-948b-350795772f18-kube-api-access-7gwhm\") pod \"ironic-operator-controller-manager-598f7747c9-vbmsc\" (UID: \"3e944bff-02ee-4d1d-948b-350795772f18\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726128 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7rxx\" (UniqueName: \"kubernetes.io/projected/f3aabab5-bdde-4359-b011-5887666ee21a-kube-api-access-v7rxx\") pod \"neutron-operator-controller-manager-78d58447c5-n9bp2\" (UID: \"f3aabab5-bdde-4359-b011-5887666ee21a\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726146 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726162 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvq29\" (UniqueName: \"kubernetes.io/projected/03233631-2567-42a5-af70-861afeefbba3-kube-api-access-dvq29\") pod \"keystone-operator-controller-manager-b8b6d4659-s2hjs\" (UID: \"03233631-2567-42a5-af70-861afeefbba3\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726180 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4972\" (UniqueName: \"kubernetes.io/projected/94e756c6-328c-4065-9d81-2cd1f5293a0a-kube-api-access-q4972\") pod \"octavia-operator-controller-manager-5f4cd88d46-npcs6\" (UID: \"94e756c6-328c-4065-9d81-2cd1f5293a0a\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726205 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvbmt\" (UniqueName: \"kubernetes.io/projected/aa376169-3b34-4289-b339-14fc6f14a0e9-kube-api-access-pvbmt\") pod 
\"manila-operator-controller-manager-78c6999f6f-lvrq9\" (UID: \"aa376169-3b34-4289-b339-14fc6f14a0e9\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726222 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726247 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6lhd\" (UniqueName: \"kubernetes.io/projected/3ed4f8f8-86dd-4331-b60d-ac713fe8be31-kube-api-access-v6lhd\") pod \"nova-operator-controller-manager-7bdb645866-vg2c5\" (UID: \"3ed4f8f8-86dd-4331-b60d-ac713fe8be31\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726266 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwzb7\" (UniqueName: \"kubernetes.io/projected/e741d752-bf89-4fc2-a173-98a5e6257ffc-kube-api-access-kwzb7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726287 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wnfp\" (UniqueName: \"kubernetes.io/projected/f3a5936d-5620-4b92-92ef-71b8387e019e-kube-api-access-2wnfp\") pod \"ovn-operator-controller-manager-6f75f45d54-7w6r2\" (UID: \"f3a5936d-5620-4b92-92ef-71b8387e019e\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.726304 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fqdn\" (UniqueName: \"kubernetes.io/projected/fa8912b6-c04f-4a1e-bb7a-8cae762f00ab-kube-api-access-5fqdn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk\" (UID: \"fa8912b6-c04f-4a1e-bb7a-8cae762f00ab\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" Jan 26 14:25:08 crc kubenswrapper[4922]: E0126 14:25:08.726821 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 14:25:08 crc kubenswrapper[4922]: E0126 14:25:08.726858 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert podName:ae2af37b-8945-48b3-8ed3-c2412b39c897 nodeName:}" failed. No retries permitted until 2026-01-26 14:25:09.226845052 +0000 UTC m=+926.429107824 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert") pod "infra-operator-controller-manager-694cf4f878-xwz7c" (UID: "ae2af37b-8945-48b3-8ed3-c2412b39c897") : secret "infra-operator-webhook-server-cert" not found Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.727207 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.727887 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.740004 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4r4qs" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.740169 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.781301 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvbmt\" (UniqueName: \"kubernetes.io/projected/aa376169-3b34-4289-b339-14fc6f14a0e9-kube-api-access-pvbmt\") pod \"manila-operator-controller-manager-78c6999f6f-lvrq9\" (UID: \"aa376169-3b34-4289-b339-14fc6f14a0e9\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.782371 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.783912 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.784726 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.789654 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwhm\" (UniqueName: \"kubernetes.io/projected/3e944bff-02ee-4d1d-948b-350795772f18-kube-api-access-7gwhm\") pod \"ironic-operator-controller-manager-598f7747c9-vbmsc\" (UID: \"3e944bff-02ee-4d1d-948b-350795772f18\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.789955 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-474w6" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.790557 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fqdn\" (UniqueName: \"kubernetes.io/projected/fa8912b6-c04f-4a1e-bb7a-8cae762f00ab-kube-api-access-5fqdn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk\" (UID: \"fa8912b6-c04f-4a1e-bb7a-8cae762f00ab\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.792055 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l88v\" (UniqueName: \"kubernetes.io/projected/ae2af37b-8945-48b3-8ed3-c2412b39c897-kube-api-access-2l88v\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.801844 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvq29\" (UniqueName: \"kubernetes.io/projected/03233631-2567-42a5-af70-861afeefbba3-kube-api-access-dvq29\") pod \"keystone-operator-controller-manager-b8b6d4659-s2hjs\" (UID: \"03233631-2567-42a5-af70-861afeefbba3\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.807152 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.825404 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.826627 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzw5p\" (UniqueName: \"kubernetes.io/projected/d487712f-146f-4342-a84e-6dca10b381fe-kube-api-access-vzw5p\") pod \"placement-operator-controller-manager-79d5ccc684-2mvbc\" (UID: \"d487712f-146f-4342-a84e-6dca10b381fe\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.826679 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7rxx\" (UniqueName: \"kubernetes.io/projected/f3aabab5-bdde-4359-b011-5887666ee21a-kube-api-access-v7rxx\") pod \"neutron-operator-controller-manager-78d58447c5-n9bp2\" (UID: \"f3aabab5-bdde-4359-b011-5887666ee21a\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.826709 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4972\" (UniqueName: \"kubernetes.io/projected/94e756c6-328c-4065-9d81-2cd1f5293a0a-kube-api-access-q4972\") pod \"octavia-operator-controller-manager-5f4cd88d46-npcs6\" (UID: \"94e756c6-328c-4065-9d81-2cd1f5293a0a\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.826733 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.826750 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6lhd\" (UniqueName: \"kubernetes.io/projected/3ed4f8f8-86dd-4331-b60d-ac713fe8be31-kube-api-access-v6lhd\") pod \"nova-operator-controller-manager-7bdb645866-vg2c5\" (UID: \"3ed4f8f8-86dd-4331-b60d-ac713fe8be31\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.826769 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjdp\" (UniqueName: \"kubernetes.io/projected/3732fc65-c182-42c3-9a98-b9aff1d49a1d-kube-api-access-tbjdp\") pod \"telemetry-operator-controller-manager-85cd9769bb-wpq74\" (UID: \"3732fc65-c182-42c3-9a98-b9aff1d49a1d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.826788 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwzb7\" (UniqueName: \"kubernetes.io/projected/e741d752-bf89-4fc2-a173-98a5e6257ffc-kube-api-access-kwzb7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.826807 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wnfp\" 
(UniqueName: \"kubernetes.io/projected/f3a5936d-5620-4b92-92ef-71b8387e019e-kube-api-access-2wnfp\") pod \"ovn-operator-controller-manager-6f75f45d54-7w6r2\" (UID: \"f3a5936d-5620-4b92-92ef-71b8387e019e\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" Jan 26 14:25:08 crc kubenswrapper[4922]: E0126 14:25:08.827336 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 14:25:08 crc kubenswrapper[4922]: E0126 14:25:08.827371 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert podName:e741d752-bf89-4fc2-a173-98a5e6257ffc nodeName:}" failed. No retries permitted until 2026-01-26 14:25:09.327358093 +0000 UTC m=+926.529620865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" (UID: "e741d752-bf89-4fc2-a173-98a5e6257ffc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.849912 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4972\" (UniqueName: \"kubernetes.io/projected/94e756c6-328c-4065-9d81-2cd1f5293a0a-kube-api-access-q4972\") pod \"octavia-operator-controller-manager-5f4cd88d46-npcs6\" (UID: \"94e756c6-328c-4065-9d81-2cd1f5293a0a\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.863343 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.874717 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6lhd\" (UniqueName: \"kubernetes.io/projected/3ed4f8f8-86dd-4331-b60d-ac713fe8be31-kube-api-access-v6lhd\") pod \"nova-operator-controller-manager-7bdb645866-vg2c5\" (UID: \"3ed4f8f8-86dd-4331-b60d-ac713fe8be31\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.877478 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wnfp\" (UniqueName: \"kubernetes.io/projected/f3a5936d-5620-4b92-92ef-71b8387e019e-kube-api-access-2wnfp\") pod \"ovn-operator-controller-manager-6f75f45d54-7w6r2\" (UID: \"f3a5936d-5620-4b92-92ef-71b8387e019e\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.877903 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7rxx\" (UniqueName: \"kubernetes.io/projected/f3aabab5-bdde-4359-b011-5887666ee21a-kube-api-access-v7rxx\") pod \"neutron-operator-controller-manager-78d58447c5-n9bp2\" (UID: \"f3aabab5-bdde-4359-b011-5887666ee21a\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.881997 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwzb7\" (UniqueName: \"kubernetes.io/projected/e741d752-bf89-4fc2-a173-98a5e6257ffc-kube-api-access-kwzb7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: 
\"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.903616 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.903978 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.904774 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.909294 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.918790 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-75vt2" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.929719 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbjdp\" (UniqueName: \"kubernetes.io/projected/3732fc65-c182-42c3-9a98-b9aff1d49a1d-kube-api-access-tbjdp\") pod \"telemetry-operator-controller-manager-85cd9769bb-wpq74\" (UID: \"3732fc65-c182-42c3-9a98-b9aff1d49a1d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.929801 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzw5p\" (UniqueName: \"kubernetes.io/projected/d487712f-146f-4342-a84e-6dca10b381fe-kube-api-access-vzw5p\") pod \"placement-operator-controller-manager-79d5ccc684-2mvbc\" (UID: \"d487712f-146f-4342-a84e-6dca10b381fe\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.939431 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.949797 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.975591 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbjdp\" (UniqueName: \"kubernetes.io/projected/3732fc65-c182-42c3-9a98-b9aff1d49a1d-kube-api-access-tbjdp\") pod \"telemetry-operator-controller-manager-85cd9769bb-wpq74\" (UID: \"3732fc65-c182-42c3-9a98-b9aff1d49a1d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.977202 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzw5p\" (UniqueName: \"kubernetes.io/projected/d487712f-146f-4342-a84e-6dca10b381fe-kube-api-access-vzw5p\") pod \"placement-operator-controller-manager-79d5ccc684-2mvbc\" (UID: \"d487712f-146f-4342-a84e-6dca10b381fe\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.993491 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x"] Jan 26 14:25:08 crc kubenswrapper[4922]: I0126 14:25:08.994408 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:08.998361 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:08.999085 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.002484 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-44g4p" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.014376 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.033199 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjkz\" (UniqueName: \"kubernetes.io/projected/9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143-kube-api-access-kcjkz\") pod \"swift-operator-controller-manager-547cbdb99f-h4zkz\" (UID: \"9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.047483 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.057400 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.059984 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.067041 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wtssc" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.092018 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.109591 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.128091 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.128701 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.138378 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjkz\" (UniqueName: \"kubernetes.io/projected/9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143-kube-api-access-kcjkz\") pod \"swift-operator-controller-manager-547cbdb99f-h4zkz\" (UID: \"9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.138464 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvdpx\" (UniqueName: \"kubernetes.io/projected/542bee92-421c-4969-9fb8-da684d74ab1d-kube-api-access-nvdpx\") pod \"test-operator-controller-manager-69797bbcbd-8tk4x\" (UID: \"542bee92-421c-4969-9fb8-da684d74ab1d\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.159655 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjkz\" (UniqueName: \"kubernetes.io/projected/9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143-kube-api-access-kcjkz\") pod \"swift-operator-controller-manager-547cbdb99f-h4zkz\" (UID: \"9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.159726 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b6496445-44795"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.185729 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b6496445-44795"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.185840 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.189834 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gs7wr" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.190207 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.190629 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.203610 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.204755 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.208113 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.214054 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jqs5f" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.245650 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.245690 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498mz\" (UniqueName: \"kubernetes.io/projected/033a8dae-299b-49cc-a63e-2d4bf250488c-kube-api-access-498mz\") pod \"watcher-operator-controller-manager-75d4cf59bb-dctt2\" (UID: \"033a8dae-299b-49cc-a63e-2d4bf250488c\") " pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.245757 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.245794 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.245812 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvdpx\" (UniqueName: 
\"kubernetes.io/projected/542bee92-421c-4969-9fb8-da684d74ab1d-kube-api-access-nvdpx\") pod \"test-operator-controller-manager-69797bbcbd-8tk4x\" (UID: \"542bee92-421c-4969-9fb8-da684d74ab1d\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.245839 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q26s\" (UniqueName: \"kubernetes.io/projected/2de91e12-3fbb-48e3-ac0f-55d98628405e-kube-api-access-4q26s\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.246404 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.246449 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert podName:ae2af37b-8945-48b3-8ed3-c2412b39c897 nodeName:}" failed. No retries permitted until 2026-01-26 14:25:10.246435261 +0000 UTC m=+927.448698033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert") pod "infra-operator-controller-manager-694cf4f878-xwz7c" (UID: "ae2af37b-8945-48b3-8ed3-c2412b39c897") : secret "infra-operator-webhook-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.266897 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.291100 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvdpx\" (UniqueName: \"kubernetes.io/projected/542bee92-421c-4969-9fb8-da684d74ab1d-kube-api-access-nvdpx\") pod \"test-operator-controller-manager-69797bbcbd-8tk4x\" (UID: \"542bee92-421c-4969-9fb8-da684d74ab1d\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.297516 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.335514 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.346964 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.347019 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.347129 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.347183 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:09.847165969 +0000 UTC m=+927.049428741 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "metrics-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.347369 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9fr9\" (UniqueName: \"kubernetes.io/projected/f78795e3-4b41-43ec-b56d-37745dd146cd-kube-api-access-r9fr9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fwdn9\" (UID: \"f78795e3-4b41-43ec-b56d-37745dd146cd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.347411 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.347432 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q26s\" (UniqueName: \"kubernetes.io/projected/2de91e12-3fbb-48e3-ac0f-55d98628405e-kube-api-access-4q26s\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.347492 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-498mz\" (UniqueName: \"kubernetes.io/projected/033a8dae-299b-49cc-a63e-2d4bf250488c-kube-api-access-498mz\") pod \"watcher-operator-controller-manager-75d4cf59bb-dctt2\" (UID: \"033a8dae-299b-49cc-a63e-2d4bf250488c\") " pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.347529 4922 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.347676 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.347727 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert podName:e741d752-bf89-4fc2-a173-98a5e6257ffc nodeName:}" failed. No retries permitted until 2026-01-26 14:25:10.347713094 +0000 UTC m=+927.549975866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" (UID: "e741d752-bf89-4fc2-a173-98a5e6257ffc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.348426 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8"] Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.349455 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:09.849419153 +0000 UTC m=+927.051681945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "webhook-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.366921 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-498mz\" (UniqueName: \"kubernetes.io/projected/033a8dae-299b-49cc-a63e-2d4bf250488c-kube-api-access-498mz\") pod \"watcher-operator-controller-manager-75d4cf59bb-dctt2\" (UID: \"033a8dae-299b-49cc-a63e-2d4bf250488c\") " pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.395409 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.396190 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q26s\" (UniqueName: \"kubernetes.io/projected/2de91e12-3fbb-48e3-ac0f-55d98628405e-kube-api-access-4q26s\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.451319 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-utilities\") pod \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.451413 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-catalog-content\") pod \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.451485 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpfq7\" (UniqueName: \"kubernetes.io/projected/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-kube-api-access-qpfq7\") pod \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\" (UID: \"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6\") " Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.451627 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9fr9\" (UniqueName: \"kubernetes.io/projected/f78795e3-4b41-43ec-b56d-37745dd146cd-kube-api-access-r9fr9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fwdn9\" (UID: \"f78795e3-4b41-43ec-b56d-37745dd146cd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.454546 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-utilities" (OuterVolumeSpecName: "utilities") pod "f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" (UID: "f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.473411 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-kube-api-access-qpfq7" (OuterVolumeSpecName: "kube-api-access-qpfq7") pod "f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" (UID: "f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6"). InnerVolumeSpecName "kube-api-access-qpfq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.490602 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9fr9\" (UniqueName: \"kubernetes.io/projected/f78795e3-4b41-43ec-b56d-37745dd146cd-kube-api-access-r9fr9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fwdn9\" (UID: \"f78795e3-4b41-43ec-b56d-37745dd146cd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.505157 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.537895 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.552519 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpfq7\" (UniqueName: \"kubernetes.io/projected/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-kube-api-access-qpfq7\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.552543 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.591395 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.608496 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.624390 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" (UID: "f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:25:09 crc kubenswrapper[4922]: W0126 14:25:09.629870 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb93770e_722e_474d_93ef_5767d506fbf5.slice/crio-7d385f3871c0f0d7904e48e9cae115ee9670062a6210afdc29f75ced7c54fc01 WatchSource:0}: Error finding container 7d385f3871c0f0d7904e48e9cae115ee9670062a6210afdc29f75ced7c54fc01: Status 404 returned error can't find the container with id 7d385f3871c0f0d7904e48e9cae115ee9670062a6210afdc29f75ced7c54fc01 Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.653956 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.767037 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.791236 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.801828 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7"] Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.861116 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.861196 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.861304 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.861366 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:10.861350086 +0000 UTC m=+928.063612858 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "webhook-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.861415 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: E0126 14:25:09.861488 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:10.861456169 +0000 UTC m=+928.063718941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "metrics-server-cert" not found Jan 26 14:25:09 crc kubenswrapper[4922]: I0126 14:25:09.902310 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.082901 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.091764 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.097645 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.101132 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" event={"ID":"f3aabab5-bdde-4359-b011-5887666ee21a","Type":"ContainerStarted","Data":"a8cc198869eaa0819cd6bb6f9eb1389613f22ef0e8ad2745bbf9a9c9aac96898"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.102568 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.107580 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" event={"ID":"98d7d86a-4bc1-4165-9dc5-3260b879df04","Type":"ContainerStarted","Data":"26f704d837264f040c8b214ae8960e4736f0ba9c33c7455d3bca4f8b88e38214"} Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.107631 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2wnfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-7w6r2_openstack-operators(f3a5936d-5620-4b92-92ef-71b8387e019e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.108970 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" podUID="f3a5936d-5620-4b92-92ef-71b8387e019e" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.109457 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" event={"ID":"3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e","Type":"ContainerStarted","Data":"014648e732d54e19a241e683c933ddbd6842c8543dd02e2a097775ef169e4695"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.110774 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" event={"ID":"3ed4f8f8-86dd-4331-b60d-ac713fe8be31","Type":"ContainerStarted","Data":"8e13d20b76d5491d240f0e38eef02e0cd10ad4f63f7f570ec27f31a0eb768153"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.111725 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" 
event={"ID":"eb93770e-722e-474d-93ef-5767d506fbf5","Type":"ContainerStarted","Data":"7d385f3871c0f0d7904e48e9cae115ee9670062a6210afdc29f75ced7c54fc01"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.114571 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.115212 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sw6ch" event={"ID":"f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6","Type":"ContainerDied","Data":"0efdde29e1ea93633fe8c5b44d58a60973208970f21c243fd870e46aa38e2dfc"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.115264 4922 scope.go:117] "RemoveContainer" containerID="dc94e5ce3bec721ba9b2998daa94a339067d46718b56b3db0a665f6161662cf7" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.115417 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sw6ch" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.119326 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" event={"ID":"edd25ba7-355c-48aa-a7f5-0a60df9f1307","Type":"ContainerStarted","Data":"47b6d4588f8176be66c68b39929e4193b859b791e0528cb678dc6624e4ae7d7f"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.120848 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.121113 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" event={"ID":"2dc5ea59-1467-4fec-b933-e144ea4fda4a","Type":"ContainerStarted","Data":"2cc46b2d5d891cc6e576c28303db20a0156864c6522fe38eb565929e41a6be74"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.121950 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" event={"ID":"03233631-2567-42a5-af70-861afeefbba3","Type":"ContainerStarted","Data":"80f2004948832496c95a3a1f381d6a324dd2ab57a2ec0de31a398af633367dad"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.126097 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" event={"ID":"fa8912b6-c04f-4a1e-bb7a-8cae762f00ab","Type":"ContainerStarted","Data":"4f90e43c807c49faecc4cfb174d715685f606c94679dc2525951907b7b08975d"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.127120 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" event={"ID":"94e756c6-328c-4065-9d81-2cd1f5293a0a","Type":"ContainerStarted","Data":"cdd00e800f43e48b7992f6fec00b9f4633b5a48875b1778f4991085cc6524a29"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.130969 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" event={"ID":"2203be8d-8aa1-4617-8297-c715783969a6","Type":"ContainerStarted","Data":"509fe43b9355f2d6829aebf1a275d7c83d0ea999d3f42bf4443642fe38e73f1f"} Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.151291 4922 scope.go:117] "RemoveContainer" containerID="41544b6f122ee50edac81e2bf536155fb94c694573d903d6bb274004a7675d66" Jan 26 14:25:10 crc kubenswrapper[4922]: 
I0126 14:25:10.163135 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sw6ch"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.165672 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sw6ch"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.198288 4922 scope.go:117] "RemoveContainer" containerID="521ae4ee95f9ac54b6bed9f8639eb7ad123a2023a525522a00648932f280c5c2" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.267385 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.267733 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.267822 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert podName:ae2af37b-8945-48b3-8ed3-c2412b39c897 nodeName:}" failed. No retries permitted until 2026-01-26 14:25:12.267801198 +0000 UTC m=+929.470063970 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert") pod "infra-operator-controller-manager-694cf4f878-xwz7c" (UID: "ae2af37b-8945-48b3-8ed3-c2412b39c897") : secret "infra-operator-webhook-server-cert" not found Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.289699 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74"] Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.306910 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbjdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-wpq74_openstack-operators(3732fc65-c182-42c3-9a98-b9aff1d49a1d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.308202 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" podUID="3732fc65-c182-42c3-9a98-b9aff1d49a1d" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.376193 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.376540 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.376614 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert podName:e741d752-bf89-4fc2-a173-98a5e6257ffc nodeName:}" failed. No retries permitted until 2026-01-26 14:25:12.376591792 +0000 UTC m=+929.578854564 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" (UID: "e741d752-bf89-4fc2-a173-98a5e6257ffc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.445489 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2"] Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.452165 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9"] Jan 26 14:25:10 crc kubenswrapper[4922]: W0126 14:25:10.467462 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78795e3_4b41_43ec_b56d_37745dd146cd.slice/crio-7bf863bbd691e1123612da587fa4b4876a73f9313ecc9f256d7e2fa11899c9c4 WatchSource:0}: Error finding container 7bf863bbd691e1123612da587fa4b4876a73f9313ecc9f256d7e2fa11899c9c4: Status 404 returned error can't find the container with id 7bf863bbd691e1123612da587fa4b4876a73f9313ecc9f256d7e2fa11899c9c4 Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.474599 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.230:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-498mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.474599 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.230:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-498mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75d4cf59bb-dctt2_openstack-operators(033a8dae-299b-49cc-a63e-2d4bf250488c): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.478251 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" podUID="033a8dae-299b-49cc-a63e-2d4bf250488c"
Jan 26 14:25:10 crc kubenswrapper[4922]: W0126 14:25:10.486468 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa376169_3b34_4289_b339_14fc6f14a0e9.slice/crio-df2563bfe5123a3dd2744ac059d0fbff14b94319b5abf97247792996552330bb WatchSource:0}: Error finding container df2563bfe5123a3dd2744ac059d0fbff14b94319b5abf97247792996552330bb: Status 404 returned error can't find the container with id df2563bfe5123a3dd2744ac059d0fbff14b94319b5abf97247792996552330bb
Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.489094 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9"]
Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.528009 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc"]
Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.528249 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x"]
Jan 26 14:25:10 crc kubenswrapper[4922]: W0126 14:25:10.533919 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd487712f_146f_4342_a84e_6dca10b381fe.slice/crio-46ef802c41276623f06bc83ca1f385045099709fd42ab8a1650e921746e3eefa WatchSource:0}: Error finding container 46ef802c41276623f06bc83ca1f385045099709fd42ab8a1650e921746e3eefa: Status 404 returned error can't find the container with id 46ef802c41276623f06bc83ca1f385045099709fd42ab8a1650e921746e3eefa
Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.538339 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvbmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-lvrq9_openstack-operators(aa376169-3b34-4289-b339-14fc6f14a0e9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.540016 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" podUID="aa376169-3b34-4289-b339-14fc6f14a0e9" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.546193 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzw5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-2mvbc_openstack-operators(d487712f-146f-4342-a84e-6dca10b381fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.549564 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" podUID="d487712f-146f-4342-a84e-6dca10b381fe" Jan 26 14:25:10 crc kubenswrapper[4922]: W0126 14:25:10.551721 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod542bee92_421c_4969_9fb8_da684d74ab1d.slice/crio-382ffd220812674f97266fa95f2e708de26794d72e8e2101c01fe77db3a4aa52 WatchSource:0}: Error finding container 382ffd220812674f97266fa95f2e708de26794d72e8e2101c01fe77db3a4aa52: Status 404 returned error can't find the container with id 382ffd220812674f97266fa95f2e708de26794d72e8e2101c01fe77db3a4aa52 Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.886808 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.886868 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.887049 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret 
"metrics-server-cert" not found Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.887115 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:12.887101796 +0000 UTC m=+930.089364568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "metrics-server-cert" not found Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.887477 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.887501 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:12.887493777 +0000 UTC m=+930.089756549 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "webhook-server-cert" not found Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.963954 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gcrjk"] Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.964260 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerName="extract-utilities" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.964276 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerName="extract-utilities" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.964287 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerName="registry-server" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.964294 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerName="registry-server" Jan 26 14:25:10 crc kubenswrapper[4922]: E0126 14:25:10.964303 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerName="extract-content" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.964309 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerName="extract-content" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.964439 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" containerName="registry-server" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.965374 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:10 crc kubenswrapper[4922]: I0126 14:25:10.987608 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gcrjk"] Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.089535 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-catalog-content\") pod \"certified-operators-gcrjk\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.089837 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wdwn\" (UniqueName: \"kubernetes.io/projected/2153c0a7-1535-4cff-a812-8a380c6cae80-kube-api-access-2wdwn\") pod \"certified-operators-gcrjk\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.089880 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-utilities\") pod \"certified-operators-gcrjk\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.106979 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6" path="/var/lib/kubelet/pods/f48f2c2c-8701-46f1-9e1f-5e18c8ca34c6/volumes" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.153550 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" event={"ID":"3732fc65-c182-42c3-9a98-b9aff1d49a1d","Type":"ContainerStarted","Data":"8ee44820268b148bc6cf9aca0335c23d7a4a08e74497799a3cc53c7756b64385"} Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.155427 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" event={"ID":"d487712f-146f-4342-a84e-6dca10b381fe","Type":"ContainerStarted","Data":"46ef802c41276623f06bc83ca1f385045099709fd42ab8a1650e921746e3eefa"} Jan 26 14:25:11 crc kubenswrapper[4922]: E0126 14:25:11.155830 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" podUID="3732fc65-c182-42c3-9a98-b9aff1d49a1d" Jan 26 14:25:11 crc kubenswrapper[4922]: E0126 14:25:11.156595 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" podUID="d487712f-146f-4342-a84e-6dca10b381fe" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.156959 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" event={"ID":"542bee92-421c-4969-9fb8-da684d74ab1d","Type":"ContainerStarted","Data":"382ffd220812674f97266fa95f2e708de26794d72e8e2101c01fe77db3a4aa52"} Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.160446 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" event={"ID":"9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143","Type":"ContainerStarted","Data":"341b4ffb759dc7a0dbde75dc47b56203d01d92d3724b2bdd2a922fd54d417219"} Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.177521 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" event={"ID":"033a8dae-299b-49cc-a63e-2d4bf250488c","Type":"ContainerStarted","Data":"359e4fe69825e0b53688b3ac1b26a438a9d5e58544862d559792d95a4b3df2f7"} Jan 26 14:25:11 crc kubenswrapper[4922]: E0126 14:25:11.181894 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" podUID="033a8dae-299b-49cc-a63e-2d4bf250488c" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.190416 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" event={"ID":"f3a5936d-5620-4b92-92ef-71b8387e019e","Type":"ContainerStarted","Data":"4401499b84752f92dc3af83804be42768188aae200be0c6db2f054429a109073"} Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.191397 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-utilities\") pod \"certified-operators-gcrjk\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.191425 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wdwn\" (UniqueName: \"kubernetes.io/projected/2153c0a7-1535-4cff-a812-8a380c6cae80-kube-api-access-2wdwn\") pod \"certified-operators-gcrjk\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.191457 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-catalog-content\") pod \"certified-operators-gcrjk\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.191852 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-catalog-content\") pod \"certified-operators-gcrjk\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.191883 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-utilities\") pod 
\"certified-operators-gcrjk\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:11 crc kubenswrapper[4922]: E0126 14:25:11.192558 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" podUID="f3a5936d-5620-4b92-92ef-71b8387e019e" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.196688 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" event={"ID":"3e944bff-02ee-4d1d-948b-350795772f18","Type":"ContainerStarted","Data":"5bca200b78cafdfccd7f6072cf08b3d492e94a7dfe1161b71831cb2cbb7a3972"} Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.220149 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" event={"ID":"aa376169-3b34-4289-b339-14fc6f14a0e9","Type":"ContainerStarted","Data":"df2563bfe5123a3dd2744ac059d0fbff14b94319b5abf97247792996552330bb"} Jan 26 14:25:11 crc kubenswrapper[4922]: E0126 14:25:11.221672 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" podUID="aa376169-3b34-4289-b339-14fc6f14a0e9" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.222165 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" event={"ID":"f78795e3-4b41-43ec-b56d-37745dd146cd","Type":"ContainerStarted","Data":"7bf863bbd691e1123612da587fa4b4876a73f9313ecc9f256d7e2fa11899c9c4"} Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.225487 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wdwn\" (UniqueName: \"kubernetes.io/projected/2153c0a7-1535-4cff-a812-8a380c6cae80-kube-api-access-2wdwn\") pod \"certified-operators-gcrjk\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.303334 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gcrjk"
Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.308002 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.308043 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:25:11 crc kubenswrapper[4922]: I0126 14:25:11.863469 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gcrjk"]
Jan 26 14:25:12 crc kubenswrapper[4922]: I0126 14:25:12.239496 4922 generic.go:334] "Generic (PLEG): container finished" podID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerID="a7c0d2c1f121efdeb5360730bc4719dd1402b888f90dd3afd5b8940336c4930c" exitCode=0
Jan 26 14:25:12 crc kubenswrapper[4922]: I0126 14:25:12.240524 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcrjk" event={"ID":"2153c0a7-1535-4cff-a812-8a380c6cae80","Type":"ContainerDied","Data":"a7c0d2c1f121efdeb5360730bc4719dd1402b888f90dd3afd5b8940336c4930c"}
Jan 26 14:25:12 crc kubenswrapper[4922]: I0126 14:25:12.240549 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcrjk" event={"ID":"2153c0a7-1535-4cff-a812-8a380c6cae80","Type":"ContainerStarted","Data":"de2f62979b659c39a0acfad814aa1c30d51bad065d2e369b3fa731cc276728ec"}
Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.251463 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" podUID="d487712f-146f-4342-a84e-6dca10b381fe"
Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.251834 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" podUID="aa376169-3b34-4289-b339-14fc6f14a0e9"
Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.251878 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" podUID="3732fc65-c182-42c3-9a98-b9aff1d49a1d"
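The patch_prober/prober entries above record the kubelet's HTTP liveness check against the machine-config-daemon: the prober counts any status in [200, 400) as success, and a refused connection as a failure that, repeated FailureThreshold times in a row, triggers a container restart. A stdlib-only sketch of those semantics (endpoint and 1s timeout copied from the log; the loop itself is illustrative):

```go
// Sketch of an HTTP liveness check with kubelet-like semantics: 2xx/3xx
// responses are healthy, anything else (including "connection refused")
// is a failure; failureThreshold consecutive failures end the loop the
// way the kubelet would restart the container.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second} // TimeoutSeconds:1, as in the container specs above
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("unhealthy status %d", resp.StatusCode)
}

func main() {
	const failureThreshold = 3 // matches FailureThreshold:3 in the dumps
	failures := 0
	for failures < failureThreshold {
		if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
			failures++
			fmt.Println("Probe failed:", err)
		} else {
			failures = 0
		}
		time.Sleep(1 * time.Second) // stand-in for PeriodSeconds
	}
	fmt.Println("failureThreshold reached: kubelet would restart the container")
}
```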
\\\"38.102.83.230:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" podUID="033a8dae-299b-49cc-a63e-2d4bf250488c" Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.252194 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" podUID="f3a5936d-5620-4b92-92ef-71b8387e019e" Jan 26 14:25:12 crc kubenswrapper[4922]: I0126 14:25:12.320527 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.320811 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.320863 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert podName:ae2af37b-8945-48b3-8ed3-c2412b39c897 nodeName:}" failed. No retries permitted until 2026-01-26 14:25:16.320847102 +0000 UTC m=+933.523109874 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert") pod "infra-operator-controller-manager-694cf4f878-xwz7c" (UID: "ae2af37b-8945-48b3-8ed3-c2412b39c897") : secret "infra-operator-webhook-server-cert" not found Jan 26 14:25:12 crc kubenswrapper[4922]: I0126 14:25:12.422771 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.423738 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.423775 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert podName:e741d752-bf89-4fc2-a173-98a5e6257ffc nodeName:}" failed. No retries permitted until 2026-01-26 14:25:16.423762451 +0000 UTC m=+933.626025213 (durationBeforeRetry 4s). 
Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.423775 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert podName:e741d752-bf89-4fc2-a173-98a5e6257ffc nodeName:}" failed. No retries permitted until 2026-01-26 14:25:16.423762451 +0000 UTC m=+933.626025213 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" (UID: "e741d752-bf89-4fc2-a173-98a5e6257ffc") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 14:25:12 crc kubenswrapper[4922]: I0126 14:25:12.934747 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795"
Jan 26 14:25:12 crc kubenswrapper[4922]: I0126 14:25:12.934832 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795"
Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.935096 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.935143 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:16.935128349 +0000 UTC m=+934.137391121 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "metrics-server-cert" not found
Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.935460 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
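Note the durationBeforeRetry progression for these cert volumes: 2s at 14:25:10, 4s at 14:25:12, and 8s at 14:25:16 below. The volume manager doubles the delay after each failed MountVolume attempt on the same volume. A sketch of that doubling, where the initial delay, factor, and cap are illustrative assumptions rather than kubelet constants:

```go
// Sketch of the doubling retry delay seen in the nestedpendingoperations
// entries (durationBeforeRetry 2s -> 4s -> 8s). Initial delay, factor, and
// maxDelay here are assumptions for illustration, not kubelet constants.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 2 * time.Second
	const factor = 2
	maxDelay := 2 * time.Minute // growth stops once some cap is reached
	at := time.Date(2026, time.January, 26, 14, 25, 10, 0, time.UTC)
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d at %s: durationBeforeRetry %s\n", attempt, at.Format("15:04:05"), delay)
		at = at.Add(delay)
		delay *= factor
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Running it reproduces the cadence visible in the log (retries at 14:25:12, 14:25:16, 14:25:24, ...), which is why the same "not found" error repeats at widening intervals rather than continuously.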
Jan 26 14:25:12 crc kubenswrapper[4922]: E0126 14:25:12.935484 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:16.935477278 +0000 UTC m=+934.137740050 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "webhook-server-cert" not found
Jan 26 14:25:16 crc kubenswrapper[4922]: I0126 14:25:16.391270 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c"
Jan 26 14:25:16 crc kubenswrapper[4922]: E0126 14:25:16.391434 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 14:25:16 crc kubenswrapper[4922]: E0126 14:25:16.392113 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert podName:ae2af37b-8945-48b3-8ed3-c2412b39c897 nodeName:}" failed. No retries permitted until 2026-01-26 14:25:24.392095608 +0000 UTC m=+941.594358380 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert") pod "infra-operator-controller-manager-694cf4f878-xwz7c" (UID: "ae2af37b-8945-48b3-8ed3-c2412b39c897") : secret "infra-operator-webhook-server-cert" not found
Jan 26 14:25:16 crc kubenswrapper[4922]: I0126 14:25:16.493591 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x"
Jan 26 14:25:16 crc kubenswrapper[4922]: E0126 14:25:16.493814 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
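All of these mounts are blocked on the same root cause: the *-webhook-server-cert, metrics-server-cert, and webhook-server-cert Secrets do not exist yet (creating them is typically the job of cert-manager or the operator bundle), and the kubelet simply re-checks on each retry until they appear, as the "MountVolume.SetUp succeeded" entries at 14:25:24-25 below confirm. A client-go sketch of that check, polling until the Secret exists (namespace and name taken from the log; the kubeconfig path is an assumption):

```go
// Diagnostic sketch: poll for the Secret a blocked volume mount needs,
// mirroring the check the kubelet repeats above until SetUp succeeds.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "openstack-operators", "openstack-baremetal-operator-webhook-server-cert"
	for {
		_, err := client.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			fmt.Printf("secret %s/%s exists; mount can proceed\n", ns, name)
			return
		}
		if !apierrors.IsNotFound(err) {
			panic(err) // some other API error, not the "not found" seen in the log
		}
		fmt.Printf("secret %q not found; retrying\n", name)
		time.Sleep(2 * time.Second)
	}
}
```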
Jan 26 14:25:16 crc kubenswrapper[4922]: E0126 14:25:16.493895 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert podName:e741d752-bf89-4fc2-a173-98a5e6257ffc nodeName:}" failed. No retries permitted until 2026-01-26 14:25:24.493875736 +0000 UTC m=+941.696138518 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" (UID: "e741d752-bf89-4fc2-a173-98a5e6257ffc") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 14:25:17 crc kubenswrapper[4922]: I0126 14:25:17.001764 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795"
Jan 26 14:25:17 crc kubenswrapper[4922]: I0126 14:25:17.001836 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795"
Jan 26 14:25:17 crc kubenswrapper[4922]: E0126 14:25:17.002001 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 14:25:17 crc kubenswrapper[4922]: E0126 14:25:17.002086 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:25.002044294 +0000 UTC m=+942.204307066 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "metrics-server-cert" not found
Jan 26 14:25:17 crc kubenswrapper[4922]: E0126 14:25:17.002460 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 14:25:17 crc kubenswrapper[4922]: E0126 14:25:17.002503 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs podName:2de91e12-3fbb-48e3-ac0f-55d98628405e nodeName:}" failed. No retries permitted until 2026-01-26 14:25:25.002490546 +0000 UTC m=+942.204753328 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs") pod "openstack-operator-controller-manager-5b6496445-44795" (UID: "2de91e12-3fbb-48e3-ac0f-55d98628405e") : secret "webhook-server-cert" not found Jan 26 14:25:23 crc kubenswrapper[4922]: E0126 14:25:23.262682 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 26 14:25:23 crc kubenswrapper[4922]: E0126 14:25:23.263437 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f7xmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-grh8g_openstack-operators(eb93770e-722e-474d-93ef-5767d506fbf5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:23 crc kubenswrapper[4922]: E0126 14:25:23.264669 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" 
podUID="eb93770e-722e-474d-93ef-5767d506fbf5" Jan 26 14:25:23 crc kubenswrapper[4922]: E0126 14:25:23.339089 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" podUID="eb93770e-722e-474d-93ef-5767d506fbf5" Jan 26 14:25:24 crc kubenswrapper[4922]: E0126 14:25:24.071231 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7" Jan 26 14:25:24 crc kubenswrapper[4922]: E0126 14:25:24.071483 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4tk5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7478f7dbf9-6pjpc_openstack-operators(edd25ba7-355c-48aa-a7f5-0a60df9f1307): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:24 crc kubenswrapper[4922]: E0126 14:25:24.072900 4922 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" podUID="edd25ba7-355c-48aa-a7f5-0a60df9f1307" Jan 26 14:25:24 crc kubenswrapper[4922]: I0126 14:25:24.094431 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:25:24 crc kubenswrapper[4922]: E0126 14:25:24.347313 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" podUID="edd25ba7-355c-48aa-a7f5-0a60df9f1307" Jan 26 14:25:24 crc kubenswrapper[4922]: I0126 14:25:24.449254 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:24 crc kubenswrapper[4922]: I0126 14:25:24.463717 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae2af37b-8945-48b3-8ed3-c2412b39c897-cert\") pod \"infra-operator-controller-manager-694cf4f878-xwz7c\" (UID: \"ae2af37b-8945-48b3-8ed3-c2412b39c897\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:24 crc kubenswrapper[4922]: I0126 14:25:24.550949 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:24 crc kubenswrapper[4922]: I0126 14:25:24.555227 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e741d752-bf89-4fc2-a173-98a5e6257ffc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x\" (UID: \"e741d752-bf89-4fc2-a173-98a5e6257ffc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:24 crc kubenswrapper[4922]: I0126 14:25:24.681260 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:24 crc kubenswrapper[4922]: I0126 14:25:24.682974 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:24 crc kubenswrapper[4922]: E0126 14:25:24.869223 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 26 14:25:24 crc kubenswrapper[4922]: E0126 14:25:24.869457 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fqdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk_openstack-operators(fa8912b6-c04f-4a1e-bb7a-8cae762f00ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:24 crc kubenswrapper[4922]: E0126 14:25:24.870644 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" podUID="fa8912b6-c04f-4a1e-bb7a-8cae762f00ab" Jan 26 14:25:25 crc kubenswrapper[4922]: I0126 14:25:25.057477 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:25 crc kubenswrapper[4922]: I0126 14:25:25.057534 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:25 crc kubenswrapper[4922]: I0126 14:25:25.062337 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-webhook-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:25 crc kubenswrapper[4922]: I0126 14:25:25.062673 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2de91e12-3fbb-48e3-ac0f-55d98628405e-metrics-certs\") pod \"openstack-operator-controller-manager-5b6496445-44795\" (UID: \"2de91e12-3fbb-48e3-ac0f-55d98628405e\") " pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:25 crc kubenswrapper[4922]: I0126 14:25:25.121952 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:25 crc kubenswrapper[4922]: E0126 14:25:25.353199 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" podUID="fa8912b6-c04f-4a1e-bb7a-8cae762f00ab" Jan 26 14:25:25 crc kubenswrapper[4922]: E0126 14:25:25.521337 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 26 14:25:25 crc kubenswrapper[4922]: E0126 14:25:25.521586 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5zsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-fr5t7_openstack-operators(2203be8d-8aa1-4617-8297-c715783969a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:25 crc kubenswrapper[4922]: E0126 14:25:25.522976 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" podUID="2203be8d-8aa1-4617-8297-c715783969a6" Jan 26 14:25:26 crc kubenswrapper[4922]: E0126 14:25:26.364891 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" podUID="2203be8d-8aa1-4617-8297-c715783969a6" Jan 26 14:25:35 crc kubenswrapper[4922]: E0126 14:25:35.420296 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 26 14:25:35 crc kubenswrapper[4922]: E0126 14:25:35.420831 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4972,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-npcs6_openstack-operators(94e756c6-328c-4065-9d81-2cd1f5293a0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:35 crc kubenswrapper[4922]: E0126 14:25:35.422021 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" podUID="94e756c6-328c-4065-9d81-2cd1f5293a0a" Jan 26 14:25:35 crc kubenswrapper[4922]: E0126 14:25:35.460381 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" podUID="94e756c6-328c-4065-9d81-2cd1f5293a0a" Jan 26 14:25:36 crc kubenswrapper[4922]: E0126 14:25:36.604127 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 14:25:36 crc kubenswrapper[4922]: E0126 14:25:36.604411 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvq29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-s2hjs_openstack-operators(03233631-2567-42a5-af70-861afeefbba3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:36 crc kubenswrapper[4922]: E0126 14:25:36.605576 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" podUID="03233631-2567-42a5-af70-861afeefbba3" Jan 26 14:25:37 crc kubenswrapper[4922]: E0126 14:25:37.148461 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 14:25:37 crc kubenswrapper[4922]: E0126 14:25:37.148938 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v6lhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-vg2c5_openstack-operators(3ed4f8f8-86dd-4331-b60d-ac713fe8be31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:37 crc kubenswrapper[4922]: E0126 14:25:37.150234 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" podUID="3ed4f8f8-86dd-4331-b60d-ac713fe8be31" Jan 26 14:25:37 crc kubenswrapper[4922]: E0126 14:25:37.489539 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" podUID="03233631-2567-42a5-af70-861afeefbba3" Jan 26 14:25:37 crc kubenswrapper[4922]: E0126 14:25:37.489584 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" 
podUID="3ed4f8f8-86dd-4331-b60d-ac713fe8be31" Jan 26 14:25:37 crc kubenswrapper[4922]: E0126 14:25:37.797475 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 14:25:37 crc kubenswrapper[4922]: E0126 14:25:37.797678 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9fr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fwdn9_openstack-operators(f78795e3-4b41-43ec-b56d-37745dd146cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:37 crc kubenswrapper[4922]: E0126 14:25:37.798907 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" podUID="f78795e3-4b41-43ec-b56d-37745dd146cd" Jan 26 14:25:38 crc kubenswrapper[4922]: E0126 14:25:38.400532 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 26 14:25:38 crc kubenswrapper[4922]: E0126 14:25:38.400963 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbjdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-wpq74_openstack-operators(3732fc65-c182-42c3-9a98-b9aff1d49a1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:38 crc kubenswrapper[4922]: E0126 14:25:38.402302 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" podUID="3732fc65-c182-42c3-9a98-b9aff1d49a1d" Jan 26 14:25:38 crc kubenswrapper[4922]: E0126 14:25:38.499055 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" podUID="f78795e3-4b41-43ec-b56d-37745dd146cd" Jan 26 14:25:39 crc kubenswrapper[4922]: E0126 14:25:39.688092 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 14:25:39 crc kubenswrapper[4922]: E0126 14:25:39.688274 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2wnfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-7w6r2_openstack-operators(f3a5936d-5620-4b92-92ef-71b8387e019e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:25:39 crc kubenswrapper[4922]: E0126 14:25:39.690118 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" podUID="f3a5936d-5620-4b92-92ef-71b8387e019e" Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.306952 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.307348 4922 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.307393 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.307978 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38f65164faf1c2f39140b3ebf8dc530554515c361f23b474730bf8efbdde8f32"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.308039 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://38f65164faf1c2f39140b3ebf8dc530554515c361f23b474730bf8efbdde8f32" gracePeriod=600 Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.532850 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcrjk" event={"ID":"2153c0a7-1535-4cff-a812-8a380c6cae80","Type":"ContainerStarted","Data":"76f7a1194606ad37517df4fe231884efa47d28827eb9c3f5ac76f33f9a407ec7"} Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.533218 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c"] Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.539301 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="38f65164faf1c2f39140b3ebf8dc530554515c361f23b474730bf8efbdde8f32" exitCode=0 Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.539347 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"38f65164faf1c2f39140b3ebf8dc530554515c361f23b474730bf8efbdde8f32"} Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.539422 4922 scope.go:117] "RemoveContainer" containerID="4826d47a8978aad8de4d72d3370de97b0ca58dd44ff72fdfe2ad2319c73f0def" Jan 26 14:25:41 crc kubenswrapper[4922]: W0126 14:25:41.588381 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae2af37b_8945_48b3_8ed3_c2412b39c897.slice/crio-c84e130ad8760ac44392bb539f7bac48146079a9454e173cbd50303bacfd2193 WatchSource:0}: Error finding container c84e130ad8760ac44392bb539f7bac48146079a9454e173cbd50303bacfd2193: Status 404 returned error can't find the container with id c84e130ad8760ac44392bb539f7bac48146079a9454e173cbd50303bacfd2193 Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.591019 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x"] Jan 26 14:25:41 crc kubenswrapper[4922]: I0126 14:25:41.676840 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-5b6496445-44795"] Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.545375 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" event={"ID":"e741d752-bf89-4fc2-a173-98a5e6257ffc","Type":"ContainerStarted","Data":"20d9d0cb57682da8432d68ffeff0b8dca60ba4fc2a1bba94e32c7e3deaedf791"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.549401 4922 generic.go:334] "Generic (PLEG): container finished" podID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerID="76f7a1194606ad37517df4fe231884efa47d28827eb9c3f5ac76f33f9a407ec7" exitCode=0 Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.549561 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcrjk" event={"ID":"2153c0a7-1535-4cff-a812-8a380c6cae80","Type":"ContainerDied","Data":"76f7a1194606ad37517df4fe231884efa47d28827eb9c3f5ac76f33f9a407ec7"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.551701 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" event={"ID":"ae2af37b-8945-48b3-8ed3-c2412b39c897","Type":"ContainerStarted","Data":"c84e130ad8760ac44392bb539f7bac48146079a9454e173cbd50303bacfd2193"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.561478 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" event={"ID":"3e944bff-02ee-4d1d-948b-350795772f18","Type":"ContainerStarted","Data":"a1eae3bfd0125ac4b0d80876de4c879b79d8509077bcc7dcc8d99633a60b945e"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.562467 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.579902 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" event={"ID":"f3aabab5-bdde-4359-b011-5887666ee21a","Type":"ContainerStarted","Data":"64c2fe830c10ea880007dd78a9a82e0b5bd7e23c9530980a2e37ed832ccf85a5"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.580948 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.585353 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" podStartSLOduration=6.977921824 podStartE2EDuration="34.585341566s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.167473021 +0000 UTC m=+927.369735793" lastFinishedPulling="2026-01-26 14:25:37.774892763 +0000 UTC m=+954.977155535" observedRunningTime="2026-01-26 14:25:42.581643313 +0000 UTC m=+959.783906085" watchObservedRunningTime="2026-01-26 14:25:42.585341566 +0000 UTC m=+959.787604338" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.599402 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" event={"ID":"3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e","Type":"ContainerStarted","Data":"a7055ff30f51029d351380b23a1d34eb92f022b6f0cee5d370adc0705b1737dc"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.600136 
4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.613560 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" event={"ID":"2dc5ea59-1467-4fec-b933-e144ea4fda4a","Type":"ContainerStarted","Data":"fc3c96639097e6c58d3244555a91f49250daf7d165a77e190b104f9554514dd5"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.613804 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.627820 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" podStartSLOduration=6.897303122 podStartE2EDuration="34.627805023s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.087831466 +0000 UTC m=+927.290094238" lastFinishedPulling="2026-01-26 14:25:37.818333357 +0000 UTC m=+955.020596139" observedRunningTime="2026-01-26 14:25:42.62559072 +0000 UTC m=+959.827853492" watchObservedRunningTime="2026-01-26 14:25:42.627805023 +0000 UTC m=+959.830067795" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.639176 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" event={"ID":"542bee92-421c-4969-9fb8-da684d74ab1d","Type":"ContainerStarted","Data":"54753c79408373a2cb9a485debde0ed42805ad88fcae4c5f756706f57b5c333c"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.639894 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.648394 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" event={"ID":"9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143","Type":"ContainerStarted","Data":"705b530d742f793a5e8f3d2997f5ade642025f08967d853471d6091287ec9524"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.648752 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.648997 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" podStartSLOduration=6.448528919 podStartE2EDuration="34.64897739s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:09.57471351 +0000 UTC m=+926.776976282" lastFinishedPulling="2026-01-26 14:25:37.775161981 +0000 UTC m=+954.977424753" observedRunningTime="2026-01-26 14:25:42.645632685 +0000 UTC m=+959.847895477" watchObservedRunningTime="2026-01-26 14:25:42.64897739 +0000 UTC m=+959.851240162" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.650496 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" event={"ID":"2de91e12-3fbb-48e3-ac0f-55d98628405e","Type":"ContainerStarted","Data":"a7351a3219989e6f2ce36288f6a5761bf49a1a634a6998f9cca9e5119227d18b"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.666768 4922 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" event={"ID":"98d7d86a-4bc1-4165-9dc5-3260b879df04","Type":"ContainerStarted","Data":"7f628d4f69744b10a675209f1fcd17408acda39685e0479264e0fe0bae67389b"} Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.667845 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.668294 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" podStartSLOduration=6.400208978 podStartE2EDuration="34.668279764s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:09.550202759 +0000 UTC m=+926.752465521" lastFinishedPulling="2026-01-26 14:25:37.818273535 +0000 UTC m=+955.020536307" observedRunningTime="2026-01-26 14:25:42.66817708 +0000 UTC m=+959.870439842" watchObservedRunningTime="2026-01-26 14:25:42.668279764 +0000 UTC m=+959.870542536" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.726511 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" podStartSLOduration=6.785599565 podStartE2EDuration="34.726493443s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:09.833738538 +0000 UTC m=+927.036001300" lastFinishedPulling="2026-01-26 14:25:37.774632396 +0000 UTC m=+954.976895178" observedRunningTime="2026-01-26 14:25:42.694338947 +0000 UTC m=+959.896601719" watchObservedRunningTime="2026-01-26 14:25:42.726493443 +0000 UTC m=+959.928756215" Jan 26 14:25:42 crc kubenswrapper[4922]: I0126 14:25:42.728930 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" podStartSLOduration=7.603398316 podStartE2EDuration="34.728914091s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.090225384 +0000 UTC m=+927.292488156" lastFinishedPulling="2026-01-26 14:25:37.215741169 +0000 UTC m=+954.418003931" observedRunningTime="2026-01-26 14:25:42.726330968 +0000 UTC m=+959.928593740" watchObservedRunningTime="2026-01-26 14:25:42.728914091 +0000 UTC m=+959.931176863" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.274781 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" podStartSLOduration=8.058418737 podStartE2EDuration="35.274762181s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.558181959 +0000 UTC m=+927.760444731" lastFinishedPulling="2026-01-26 14:25:37.774525363 +0000 UTC m=+954.976788175" observedRunningTime="2026-01-26 14:25:42.749379608 +0000 UTC m=+959.951642380" watchObservedRunningTime="2026-01-26 14:25:43.274762181 +0000 UTC m=+960.477024953" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.681772 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" event={"ID":"edd25ba7-355c-48aa-a7f5-0a60df9f1307","Type":"ContainerStarted","Data":"370d1815bee1ed5eaed395a107379ad52e4cfd88e9fc3090ea749a4a9fa7e210"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.682706 4922 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.700520 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"579737a5aa8bb32a4f554c6e647711e28b3e50a7ec3de0bd2d82dee5d94940f2"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.704565 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" event={"ID":"2203be8d-8aa1-4617-8297-c715783969a6","Type":"ContainerStarted","Data":"94d78a1676582cdd3061b02ce07792dc7cc40ef6ee2b702abf6fd3e9400e3ab5"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.704876 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.706908 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" event={"ID":"2de91e12-3fbb-48e3-ac0f-55d98628405e","Type":"ContainerStarted","Data":"53ad42101644ebbd186a21f60367fe94974ff3c5fec329867b37b7e19be197d1"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.706968 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.720613 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" event={"ID":"aa376169-3b34-4289-b339-14fc6f14a0e9","Type":"ContainerStarted","Data":"1f04b867ee2aa651dafa70478e2e9e12d68c594acbf76e6ec680ff269515b38a"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.721097 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.732350 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" event={"ID":"033a8dae-299b-49cc-a63e-2d4bf250488c","Type":"ContainerStarted","Data":"8e0cb24ac8530ccfa776541fc942e0cf00adb656cdf971bbbce8caf7f5ca1b18"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.733003 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.737516 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" podStartSLOduration=4.03682913 podStartE2EDuration="35.737499388s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:09.622781414 +0000 UTC m=+926.825044186" lastFinishedPulling="2026-01-26 14:25:41.323451672 +0000 UTC m=+958.525714444" observedRunningTime="2026-01-26 14:25:43.737453067 +0000 UTC m=+960.939715839" watchObservedRunningTime="2026-01-26 14:25:43.737499388 +0000 UTC m=+960.939762160" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.812735 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" 
event={"ID":"eb93770e-722e-474d-93ef-5767d506fbf5","Type":"ContainerStarted","Data":"1e076e2232413fd9c48a0ee25be2cfcba21e41153156094d9cc2c2f3694e892a"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.813589 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.840165 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" podStartSLOduration=4.267555639 podStartE2EDuration="35.84014635s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.47449276 +0000 UTC m=+927.676755532" lastFinishedPulling="2026-01-26 14:25:42.047083471 +0000 UTC m=+959.249346243" observedRunningTime="2026-01-26 14:25:43.836684083 +0000 UTC m=+961.038946855" watchObservedRunningTime="2026-01-26 14:25:43.84014635 +0000 UTC m=+961.042409122" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.841648 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" event={"ID":"d487712f-146f-4342-a84e-6dca10b381fe","Type":"ContainerStarted","Data":"1ed45844bd93272830168dc1d399dbda047a0fb605222d2e663af99518ba353c"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.841982 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.845630 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" event={"ID":"fa8912b6-c04f-4a1e-bb7a-8cae762f00ab","Type":"ContainerStarted","Data":"234a2449c234c55f30a9954533d1065760b68db90604cb9eba0cfda4874b587d"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.846704 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.862667 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" podStartSLOduration=4.3679774590000005 podStartE2EDuration="35.862646344s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:09.8274547 +0000 UTC m=+927.029717472" lastFinishedPulling="2026-01-26 14:25:41.322123585 +0000 UTC m=+958.524386357" observedRunningTime="2026-01-26 14:25:43.861570123 +0000 UTC m=+961.063832905" watchObservedRunningTime="2026-01-26 14:25:43.862646344 +0000 UTC m=+961.064909116" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.890080 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" podStartSLOduration=4.877941128 podStartE2EDuration="35.890048386s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.538138164 +0000 UTC m=+927.740400936" lastFinishedPulling="2026-01-26 14:25:41.550245422 +0000 UTC m=+958.752508194" observedRunningTime="2026-01-26 14:25:43.889885082 +0000 UTC m=+961.092147854" watchObservedRunningTime="2026-01-26 14:25:43.890048386 +0000 UTC m=+961.092311148" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.898079 4922 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcrjk" event={"ID":"2153c0a7-1535-4cff-a812-8a380c6cae80","Type":"ContainerStarted","Data":"71ee5f449fe51ea737923951529c7b09faae65879244f29087f81a951e0c526d"} Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.939618 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" podStartSLOduration=34.939603752 podStartE2EDuration="34.939603752s" podCreationTimestamp="2026-01-26 14:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:25:43.935535578 +0000 UTC m=+961.137798350" watchObservedRunningTime="2026-01-26 14:25:43.939603752 +0000 UTC m=+961.141866524" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.958418 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gcrjk" podStartSLOduration=3.190094812 podStartE2EDuration="33.958398442s" podCreationTimestamp="2026-01-26 14:25:10 +0000 UTC" firstStartedPulling="2026-01-26 14:25:12.252917117 +0000 UTC m=+929.455179889" lastFinishedPulling="2026-01-26 14:25:43.021220747 +0000 UTC m=+960.223483519" observedRunningTime="2026-01-26 14:25:43.957465356 +0000 UTC m=+961.159728118" watchObservedRunningTime="2026-01-26 14:25:43.958398442 +0000 UTC m=+961.160661214" Jan 26 14:25:43 crc kubenswrapper[4922]: I0126 14:25:43.992935 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" podStartSLOduration=3.69223394 podStartE2EDuration="35.992920434s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:09.774958572 +0000 UTC m=+926.977221344" lastFinishedPulling="2026-01-26 14:25:42.075645066 +0000 UTC m=+959.277907838" observedRunningTime="2026-01-26 14:25:43.985413733 +0000 UTC m=+961.187676505" watchObservedRunningTime="2026-01-26 14:25:43.992920434 +0000 UTC m=+961.195183206" Jan 26 14:25:44 crc kubenswrapper[4922]: I0126 14:25:44.017964 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" podStartSLOduration=4.398078028 podStartE2EDuration="36.017948859s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:09.631724587 +0000 UTC m=+926.833987359" lastFinishedPulling="2026-01-26 14:25:41.251595418 +0000 UTC m=+958.453858190" observedRunningTime="2026-01-26 14:25:44.016500589 +0000 UTC m=+961.218763361" watchObservedRunningTime="2026-01-26 14:25:44.017948859 +0000 UTC m=+961.220211631" Jan 26 14:25:44 crc kubenswrapper[4922]: I0126 14:25:44.039291 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" podStartSLOduration=5.085903628 podStartE2EDuration="36.039275361s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.545832701 +0000 UTC m=+927.748095473" lastFinishedPulling="2026-01-26 14:25:41.499204434 +0000 UTC m=+958.701467206" observedRunningTime="2026-01-26 14:25:44.036443591 +0000 UTC m=+961.238706363" watchObservedRunningTime="2026-01-26 14:25:44.039275361 +0000 UTC m=+961.241538133" Jan 26 14:25:46 crc kubenswrapper[4922]: I0126 14:25:46.924316 4922 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" event={"ID":"ae2af37b-8945-48b3-8ed3-c2412b39c897","Type":"ContainerStarted","Data":"4b5f2138a6af6197b1ac462de7c278178afe0141ecaebb0a7441ce4c42578c03"} Jan 26 14:25:46 crc kubenswrapper[4922]: I0126 14:25:46.924991 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:46 crc kubenswrapper[4922]: I0126 14:25:46.925511 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" event={"ID":"e741d752-bf89-4fc2-a173-98a5e6257ffc","Type":"ContainerStarted","Data":"3c14b6a264726e210cb3e506a799776b9b2c51e297c14367fffb8f3ec30875fa"} Jan 26 14:25:46 crc kubenswrapper[4922]: I0126 14:25:46.926337 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:46 crc kubenswrapper[4922]: I0126 14:25:46.958058 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" podStartSLOduration=34.786678229 podStartE2EDuration="38.958037196s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:41.94411549 +0000 UTC m=+959.146378292" lastFinishedPulling="2026-01-26 14:25:46.115474487 +0000 UTC m=+963.317737259" observedRunningTime="2026-01-26 14:25:46.954437935 +0000 UTC m=+964.156700707" watchObservedRunningTime="2026-01-26 14:25:46.958037196 +0000 UTC m=+964.160299978" Jan 26 14:25:46 crc kubenswrapper[4922]: I0126 14:25:46.987997 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" podStartSLOduration=34.901807272 podStartE2EDuration="38.987983619s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:42.014931205 +0000 UTC m=+959.217194017" lastFinishedPulling="2026-01-26 14:25:46.101107592 +0000 UTC m=+963.303370364" observedRunningTime="2026-01-26 14:25:46.986457217 +0000 UTC m=+964.188719989" watchObservedRunningTime="2026-01-26 14:25:46.987983619 +0000 UTC m=+964.190246381" Jan 26 14:25:48 crc kubenswrapper[4922]: I0126 14:25:48.547783 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-d4kf8" Jan 26 14:25:48 crc kubenswrapper[4922]: I0126 14:25:48.590322 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6pjpc" Jan 26 14:25:48 crc kubenswrapper[4922]: I0126 14:25:48.607515 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-sthwh" Jan 26 14:25:48 crc kubenswrapper[4922]: I0126 14:25:48.630872 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-9xxqk" Jan 26 14:25:48 crc kubenswrapper[4922]: I0126 14:25:48.653323 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-grh8g" Jan 26 14:25:48 crc kubenswrapper[4922]: I0126 14:25:48.664186 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fr5t7" Jan 26 14:25:48 crc kubenswrapper[4922]: I0126 14:25:48.810935 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-vbmsc" Jan 26 14:25:48 crc kubenswrapper[4922]: I0126 14:25:48.908282 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-lvrq9" Jan 26 14:25:48 crc kubenswrapper[4922]: I0126 14:25:48.942418 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk" Jan 26 14:25:49 crc kubenswrapper[4922]: I0126 14:25:49.018657 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-n9bp2" Jan 26 14:25:49 crc kubenswrapper[4922]: I0126 14:25:49.132595 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-2mvbc" Jan 26 14:25:49 crc kubenswrapper[4922]: I0126 14:25:49.269325 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h4zkz" Jan 26 14:25:49 crc kubenswrapper[4922]: I0126 14:25:49.338567 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8tk4x" Jan 26 14:25:49 crc kubenswrapper[4922]: I0126 14:25:49.398905 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75d4cf59bb-dctt2" Jan 26 14:25:50 crc kubenswrapper[4922]: I0126 14:25:50.959823 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" event={"ID":"f78795e3-4b41-43ec-b56d-37745dd146cd","Type":"ContainerStarted","Data":"fe77649d75103cd6a85141821ba2c6c78353d179bf1ce9ce29117a85b92ae3de"} Jan 26 14:25:50 crc kubenswrapper[4922]: I0126 14:25:50.962730 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" event={"ID":"3ed4f8f8-86dd-4331-b60d-ac713fe8be31","Type":"ContainerStarted","Data":"a053253c8999765557ede5c0696e71675a6aedbfeb772030c2f03efdfad9834f"} Jan 26 14:25:50 crc kubenswrapper[4922]: I0126 14:25:50.962916 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" Jan 26 14:25:50 crc kubenswrapper[4922]: I0126 14:25:50.965013 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" event={"ID":"94e756c6-328c-4065-9d81-2cd1f5293a0a","Type":"ContainerStarted","Data":"00e1573cb574e5b454d9bc063441078d41897a6555a18dfea3020540ae2917dd"} Jan 26 14:25:50 crc kubenswrapper[4922]: I0126 14:25:50.965254 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" Jan 26 14:25:50 crc kubenswrapper[4922]: I0126 14:25:50.966808 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" 
event={"ID":"03233631-2567-42a5-af70-861afeefbba3","Type":"ContainerStarted","Data":"224760914f81211ddf2d10d4c1afde7e06822c57a2f099e9cf2a0f6ec1a67882"} Jan 26 14:25:50 crc kubenswrapper[4922]: I0126 14:25:50.967005 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" Jan 26 14:25:50 crc kubenswrapper[4922]: I0126 14:25:50.990922 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fwdn9" podStartSLOduration=1.9361949489999999 podStartE2EDuration="41.99090315s" podCreationTimestamp="2026-01-26 14:25:09 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.473943115 +0000 UTC m=+927.676205887" lastFinishedPulling="2026-01-26 14:25:50.528651316 +0000 UTC m=+967.730914088" observedRunningTime="2026-01-26 14:25:50.986408064 +0000 UTC m=+968.188670836" watchObservedRunningTime="2026-01-26 14:25:50.99090315 +0000 UTC m=+968.193165932" Jan 26 14:25:51 crc kubenswrapper[4922]: I0126 14:25:51.023137 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" podStartSLOduration=2.58270212 podStartE2EDuration="43.023113218s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.08827983 +0000 UTC m=+927.290542602" lastFinishedPulling="2026-01-26 14:25:50.528690908 +0000 UTC m=+967.730953700" observedRunningTime="2026-01-26 14:25:51.013612791 +0000 UTC m=+968.215875563" watchObservedRunningTime="2026-01-26 14:25:51.023113218 +0000 UTC m=+968.225376000" Jan 26 14:25:51 crc kubenswrapper[4922]: I0126 14:25:51.045358 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" podStartSLOduration=3.123341791 podStartE2EDuration="43.045337464s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.070535849 +0000 UTC m=+927.272798621" lastFinishedPulling="2026-01-26 14:25:49.992531492 +0000 UTC m=+967.194794294" observedRunningTime="2026-01-26 14:25:51.037097112 +0000 UTC m=+968.239359894" watchObservedRunningTime="2026-01-26 14:25:51.045337464 +0000 UTC m=+968.247600256" Jan 26 14:25:51 crc kubenswrapper[4922]: I0126 14:25:51.057018 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" podStartSLOduration=3.085949008 podStartE2EDuration="43.057002063s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:09.922862879 +0000 UTC m=+927.125125651" lastFinishedPulling="2026-01-26 14:25:49.893915894 +0000 UTC m=+967.096178706" observedRunningTime="2026-01-26 14:25:51.051667672 +0000 UTC m=+968.253930454" watchObservedRunningTime="2026-01-26 14:25:51.057002063 +0000 UTC m=+968.259264835" Jan 26 14:25:51 crc kubenswrapper[4922]: I0126 14:25:51.303479 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:51 crc kubenswrapper[4922]: I0126 14:25:51.303764 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:51 crc kubenswrapper[4922]: I0126 14:25:51.340436 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:52 crc kubenswrapper[4922]: I0126 14:25:52.030324 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:52 crc kubenswrapper[4922]: I0126 14:25:52.077713 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gcrjk"] Jan 26 14:25:53 crc kubenswrapper[4922]: E0126 14:25:53.100051 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" podUID="3732fc65-c182-42c3-9a98-b9aff1d49a1d" Jan 26 14:25:53 crc kubenswrapper[4922]: I0126 14:25:53.987324 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gcrjk" podUID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerName="registry-server" containerID="cri-o://71ee5f449fe51ea737923951529c7b09faae65879244f29087f81a951e0c526d" gracePeriod=2 Jan 26 14:25:54 crc kubenswrapper[4922]: I0126 14:25:54.688772 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xwz7c" Jan 26 14:25:54 crc kubenswrapper[4922]: I0126 14:25:54.690260 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x" Jan 26 14:25:55 crc kubenswrapper[4922]: E0126 14:25:55.094361 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" podUID="f3a5936d-5620-4b92-92ef-71b8387e019e" Jan 26 14:25:55 crc kubenswrapper[4922]: I0126 14:25:55.132950 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b6496445-44795" Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.009748 4922 generic.go:334] "Generic (PLEG): container finished" podID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerID="71ee5f449fe51ea737923951529c7b09faae65879244f29087f81a951e0c526d" exitCode=0 Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.009794 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcrjk" event={"ID":"2153c0a7-1535-4cff-a812-8a380c6cae80","Type":"ContainerDied","Data":"71ee5f449fe51ea737923951529c7b09faae65879244f29087f81a951e0c526d"} Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.324041 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.420794 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-catalog-content\") pod \"2153c0a7-1535-4cff-a812-8a380c6cae80\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.420832 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-utilities\") pod \"2153c0a7-1535-4cff-a812-8a380c6cae80\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.420869 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wdwn\" (UniqueName: \"kubernetes.io/projected/2153c0a7-1535-4cff-a812-8a380c6cae80-kube-api-access-2wdwn\") pod \"2153c0a7-1535-4cff-a812-8a380c6cae80\" (UID: \"2153c0a7-1535-4cff-a812-8a380c6cae80\") " Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.421736 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-utilities" (OuterVolumeSpecName: "utilities") pod "2153c0a7-1535-4cff-a812-8a380c6cae80" (UID: "2153c0a7-1535-4cff-a812-8a380c6cae80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.421972 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.425852 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2153c0a7-1535-4cff-a812-8a380c6cae80-kube-api-access-2wdwn" (OuterVolumeSpecName: "kube-api-access-2wdwn") pod "2153c0a7-1535-4cff-a812-8a380c6cae80" (UID: "2153c0a7-1535-4cff-a812-8a380c6cae80"). InnerVolumeSpecName "kube-api-access-2wdwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.476468 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2153c0a7-1535-4cff-a812-8a380c6cae80" (UID: "2153c0a7-1535-4cff-a812-8a380c6cae80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.523511 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2153c0a7-1535-4cff-a812-8a380c6cae80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:56 crc kubenswrapper[4922]: I0126 14:25:56.523545 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wdwn\" (UniqueName: \"kubernetes.io/projected/2153c0a7-1535-4cff-a812-8a380c6cae80-kube-api-access-2wdwn\") on node \"crc\" DevicePath \"\"" Jan 26 14:25:57 crc kubenswrapper[4922]: I0126 14:25:57.024157 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcrjk" event={"ID":"2153c0a7-1535-4cff-a812-8a380c6cae80","Type":"ContainerDied","Data":"de2f62979b659c39a0acfad814aa1c30d51bad065d2e369b3fa731cc276728ec"} Jan 26 14:25:57 crc kubenswrapper[4922]: I0126 14:25:57.024222 4922 scope.go:117] "RemoveContainer" containerID="71ee5f449fe51ea737923951529c7b09faae65879244f29087f81a951e0c526d" Jan 26 14:25:57 crc kubenswrapper[4922]: I0126 14:25:57.024272 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gcrjk" Jan 26 14:25:57 crc kubenswrapper[4922]: I0126 14:25:57.072323 4922 scope.go:117] "RemoveContainer" containerID="76f7a1194606ad37517df4fe231884efa47d28827eb9c3f5ac76f33f9a407ec7" Jan 26 14:25:57 crc kubenswrapper[4922]: I0126 14:25:57.081015 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gcrjk"] Jan 26 14:25:57 crc kubenswrapper[4922]: I0126 14:25:57.103549 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gcrjk"] Jan 26 14:25:57 crc kubenswrapper[4922]: I0126 14:25:57.107014 4922 scope.go:117] "RemoveContainer" containerID="a7c0d2c1f121efdeb5360730bc4719dd1402b888f90dd3afd5b8940336c4930c" Jan 26 14:25:57 crc kubenswrapper[4922]: E0126 14:25:57.215128 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2153c0a7_1535_4cff_a812_8a380c6cae80.slice\": RecentStats: unable to find data in memory cache]" Jan 26 14:25:58 crc kubenswrapper[4922]: I0126 14:25:58.829580 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-s2hjs" Jan 26 14:25:59 crc kubenswrapper[4922]: I0126 14:25:59.002992 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-vg2c5" Jan 26 14:25:59 crc kubenswrapper[4922]: I0126 14:25:59.052444 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-npcs6" Jan 26 14:25:59 crc kubenswrapper[4922]: I0126 14:25:59.109771 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2153c0a7-1535-4cff-a812-8a380c6cae80" path="/var/lib/kubelet/pods/2153c0a7-1535-4cff-a812-8a380c6cae80/volumes" Jan 26 14:26:10 crc kubenswrapper[4922]: I0126 14:26:10.140087 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" 
event={"ID":"3732fc65-c182-42c3-9a98-b9aff1d49a1d","Type":"ContainerStarted","Data":"59af6cf531f77348c962b50d3978747660070aa9f3d9d18a3e1e919e014482d4"} Jan 26 14:26:10 crc kubenswrapper[4922]: I0126 14:26:10.140708 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" Jan 26 14:26:10 crc kubenswrapper[4922]: I0126 14:26:10.160987 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" podStartSLOduration=2.940433137 podStartE2EDuration="1m2.160952856s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.306782256 +0000 UTC m=+927.509045048" lastFinishedPulling="2026-01-26 14:26:09.527301955 +0000 UTC m=+986.729564767" observedRunningTime="2026-01-26 14:26:10.160305999 +0000 UTC m=+987.362568771" watchObservedRunningTime="2026-01-26 14:26:10.160952856 +0000 UTC m=+987.363215638" Jan 26 14:26:11 crc kubenswrapper[4922]: I0126 14:26:11.151095 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" event={"ID":"f3a5936d-5620-4b92-92ef-71b8387e019e","Type":"ContainerStarted","Data":"49876a16df5807733a561ea31b84682c478ce305d527693fc03570e1efee4625"} Jan 26 14:26:11 crc kubenswrapper[4922]: I0126 14:26:11.151787 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" Jan 26 14:26:11 crc kubenswrapper[4922]: I0126 14:26:11.167966 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" podStartSLOduration=2.7768406519999997 podStartE2EDuration="1m3.167940923s" podCreationTimestamp="2026-01-26 14:25:08 +0000 UTC" firstStartedPulling="2026-01-26 14:25:10.107412048 +0000 UTC m=+927.309674820" lastFinishedPulling="2026-01-26 14:26:10.498512319 +0000 UTC m=+987.700775091" observedRunningTime="2026-01-26 14:26:11.165606131 +0000 UTC m=+988.367868943" watchObservedRunningTime="2026-01-26 14:26:11.167940923 +0000 UTC m=+988.370203725" Jan 26 14:26:19 crc kubenswrapper[4922]: I0126 14:26:19.570830 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7w6r2" Jan 26 14:26:19 crc kubenswrapper[4922]: I0126 14:26:19.571418 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-wpq74" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.185435 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-654788c785-87hph"] Jan 26 14:26:38 crc kubenswrapper[4922]: E0126 14:26:38.187296 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerName="registry-server" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.187314 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerName="registry-server" Jan 26 14:26:38 crc kubenswrapper[4922]: E0126 14:26:38.187325 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerName="extract-content" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.187331 4922 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerName="extract-content" Jan 26 14:26:38 crc kubenswrapper[4922]: E0126 14:26:38.187372 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerName="extract-utilities" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.187380 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerName="extract-utilities" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.187513 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2153c0a7-1535-4cff-a812-8a380c6cae80" containerName="registry-server" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.188377 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-654788c785-87hph" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.190423 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-x2p8s" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.191668 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.191841 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.192014 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.210457 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-654788c785-87hph"] Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.258208 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7557798fb9-frpcn"] Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.265506 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.269982 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.283730 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7482af4-1f1d-495b-bace-425b679e778d-config\") pod \"dnsmasq-dns-654788c785-87hph\" (UID: \"d7482af4-1f1d-495b-bace-425b679e778d\") " pod="openstack/dnsmasq-dns-654788c785-87hph" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.283827 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfvr5\" (UniqueName: \"kubernetes.io/projected/d7482af4-1f1d-495b-bace-425b679e778d-kube-api-access-xfvr5\") pod \"dnsmasq-dns-654788c785-87hph\" (UID: \"d7482af4-1f1d-495b-bace-425b679e778d\") " pod="openstack/dnsmasq-dns-654788c785-87hph" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.297116 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7557798fb9-frpcn"] Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.385518 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfvr5\" (UniqueName: \"kubernetes.io/projected/d7482af4-1f1d-495b-bace-425b679e778d-kube-api-access-xfvr5\") pod \"dnsmasq-dns-654788c785-87hph\" (UID: \"d7482af4-1f1d-495b-bace-425b679e778d\") " pod="openstack/dnsmasq-dns-654788c785-87hph" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.385581 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js4hn\" (UniqueName: \"kubernetes.io/projected/b6060142-97a0-42ea-8c06-94648c7e9839-kube-api-access-js4hn\") pod \"dnsmasq-dns-7557798fb9-frpcn\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") " pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.385630 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-config\") pod \"dnsmasq-dns-7557798fb9-frpcn\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") " pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.385702 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7482af4-1f1d-495b-bace-425b679e778d-config\") pod \"dnsmasq-dns-654788c785-87hph\" (UID: \"d7482af4-1f1d-495b-bace-425b679e778d\") " pod="openstack/dnsmasq-dns-654788c785-87hph" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.385724 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-dns-svc\") pod \"dnsmasq-dns-7557798fb9-frpcn\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") " pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.386548 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7482af4-1f1d-495b-bace-425b679e778d-config\") pod \"dnsmasq-dns-654788c785-87hph\" (UID: \"d7482af4-1f1d-495b-bace-425b679e778d\") " pod="openstack/dnsmasq-dns-654788c785-87hph" 
Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.410035 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfvr5\" (UniqueName: \"kubernetes.io/projected/d7482af4-1f1d-495b-bace-425b679e778d-kube-api-access-xfvr5\") pod \"dnsmasq-dns-654788c785-87hph\" (UID: \"d7482af4-1f1d-495b-bace-425b679e778d\") " pod="openstack/dnsmasq-dns-654788c785-87hph" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.486867 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-dns-svc\") pod \"dnsmasq-dns-7557798fb9-frpcn\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") " pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.486924 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js4hn\" (UniqueName: \"kubernetes.io/projected/b6060142-97a0-42ea-8c06-94648c7e9839-kube-api-access-js4hn\") pod \"dnsmasq-dns-7557798fb9-frpcn\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") " pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.487283 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-config\") pod \"dnsmasq-dns-7557798fb9-frpcn\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") " pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.487720 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-dns-svc\") pod \"dnsmasq-dns-7557798fb9-frpcn\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") " pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.487966 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-config\") pod \"dnsmasq-dns-7557798fb9-frpcn\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") " pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.502535 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js4hn\" (UniqueName: \"kubernetes.io/projected/b6060142-97a0-42ea-8c06-94648c7e9839-kube-api-access-js4hn\") pod \"dnsmasq-dns-7557798fb9-frpcn\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") " pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.506311 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-654788c785-87hph" Jan 26 14:26:38 crc kubenswrapper[4922]: I0126 14:26:38.612819 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7557798fb9-frpcn" Jan 26 14:26:39 crc kubenswrapper[4922]: I0126 14:26:38.999599 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-654788c785-87hph"] Jan 26 14:26:39 crc kubenswrapper[4922]: W0126 14:26:39.004136 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7482af4_1f1d_495b_bace_425b679e778d.slice/crio-30e70bfd9982b35fa47c5ba45fb6feb67105679b04d5486e58bb027a538aa7f7 WatchSource:0}: Error finding container 30e70bfd9982b35fa47c5ba45fb6feb67105679b04d5486e58bb027a538aa7f7: Status 404 returned error can't find the container with id 30e70bfd9982b35fa47c5ba45fb6feb67105679b04d5486e58bb027a538aa7f7 Jan 26 14:26:39 crc kubenswrapper[4922]: W0126 14:26:39.110232 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6060142_97a0_42ea_8c06_94648c7e9839.slice/crio-c495c53c24d2d13fa8825fedc197f6be916d1d1ea4826f3f2e5aca5bc8ab6cc9 WatchSource:0}: Error finding container c495c53c24d2d13fa8825fedc197f6be916d1d1ea4826f3f2e5aca5bc8ab6cc9: Status 404 returned error can't find the container with id c495c53c24d2d13fa8825fedc197f6be916d1d1ea4826f3f2e5aca5bc8ab6cc9 Jan 26 14:26:39 crc kubenswrapper[4922]: I0126 14:26:39.113111 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7557798fb9-frpcn"] Jan 26 14:26:39 crc kubenswrapper[4922]: I0126 14:26:39.765647 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-654788c785-87hph" event={"ID":"d7482af4-1f1d-495b-bace-425b679e778d","Type":"ContainerStarted","Data":"30e70bfd9982b35fa47c5ba45fb6feb67105679b04d5486e58bb027a538aa7f7"} Jan 26 14:26:39 crc kubenswrapper[4922]: I0126 14:26:39.767907 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7557798fb9-frpcn" event={"ID":"b6060142-97a0-42ea-8c06-94648c7e9839","Type":"ContainerStarted","Data":"c495c53c24d2d13fa8825fedc197f6be916d1d1ea4826f3f2e5aca5bc8ab6cc9"} Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.052721 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-654788c785-87hph"] Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.073959 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9f57bd8fc-kqsz6"] Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.075172 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.085922 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f57bd8fc-kqsz6"] Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.160862 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv87g\" (UniqueName: \"kubernetes.io/projected/b82db84b-2c45-49a4-bb32-3b749a9373fc-kube-api-access-sv87g\") pod \"dnsmasq-dns-9f57bd8fc-kqsz6\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.160980 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-dns-svc\") pod \"dnsmasq-dns-9f57bd8fc-kqsz6\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.161015 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-config\") pod \"dnsmasq-dns-9f57bd8fc-kqsz6\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.264892 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv87g\" (UniqueName: \"kubernetes.io/projected/b82db84b-2c45-49a4-bb32-3b749a9373fc-kube-api-access-sv87g\") pod \"dnsmasq-dns-9f57bd8fc-kqsz6\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.265256 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-dns-svc\") pod \"dnsmasq-dns-9f57bd8fc-kqsz6\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.265289 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-config\") pod \"dnsmasq-dns-9f57bd8fc-kqsz6\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.266033 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-config\") pod \"dnsmasq-dns-9f57bd8fc-kqsz6\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.266133 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-dns-svc\") pod \"dnsmasq-dns-9f57bd8fc-kqsz6\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.308402 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv87g\" (UniqueName: 
\"kubernetes.io/projected/b82db84b-2c45-49a4-bb32-3b749a9373fc-kube-api-access-sv87g\") pod \"dnsmasq-dns-9f57bd8fc-kqsz6\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.310955 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7557798fb9-frpcn"] Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.340043 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbb78b96f-cnmc7"] Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.343514 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.370601 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbb78b96f-cnmc7"] Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.402379 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.471884 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-config\") pod \"dnsmasq-dns-5fbb78b96f-cnmc7\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.471929 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-dns-svc\") pod \"dnsmasq-dns-5fbb78b96f-cnmc7\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.471991 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj7kf\" (UniqueName: \"kubernetes.io/projected/dbe483bb-5b30-427c-9037-52eba4d89d86-kube-api-access-wj7kf\") pod \"dnsmasq-dns-5fbb78b96f-cnmc7\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.596894 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-config\") pod \"dnsmasq-dns-5fbb78b96f-cnmc7\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.596936 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-dns-svc\") pod \"dnsmasq-dns-5fbb78b96f-cnmc7\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.596971 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj7kf\" (UniqueName: \"kubernetes.io/projected/dbe483bb-5b30-427c-9037-52eba4d89d86-kube-api-access-wj7kf\") pod \"dnsmasq-dns-5fbb78b96f-cnmc7\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.597851 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-config\") pod \"dnsmasq-dns-5fbb78b96f-cnmc7\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.598094 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-dns-svc\") pod \"dnsmasq-dns-5fbb78b96f-cnmc7\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.606791 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbb78b96f-cnmc7"] Jan 26 14:26:42 crc kubenswrapper[4922]: E0126 14:26:42.607269 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-wj7kf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" podUID="dbe483bb-5b30-427c-9037-52eba4d89d86" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.617690 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d9f4b769-kl9q7"] Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.618782 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.626505 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d9f4b769-kl9q7"] Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.629848 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj7kf\" (UniqueName: \"kubernetes.io/projected/dbe483bb-5b30-427c-9037-52eba4d89d86-kube-api-access-wj7kf\") pod \"dnsmasq-dns-5fbb78b96f-cnmc7\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.792615 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.800739 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-dns-svc\") pod \"dnsmasq-dns-57d9f4b769-kl9q7\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.800834 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-config\") pod \"dnsmasq-dns-57d9f4b769-kl9q7\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.800903 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz85x\" (UniqueName: \"kubernetes.io/projected/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-kube-api-access-nz85x\") pod \"dnsmasq-dns-57d9f4b769-kl9q7\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.805202 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.904161 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj7kf\" (UniqueName: \"kubernetes.io/projected/dbe483bb-5b30-427c-9037-52eba4d89d86-kube-api-access-wj7kf\") pod \"dbe483bb-5b30-427c-9037-52eba4d89d86\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.904243 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-config\") pod \"dbe483bb-5b30-427c-9037-52eba4d89d86\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.904268 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-dns-svc\") pod \"dbe483bb-5b30-427c-9037-52eba4d89d86\" (UID: \"dbe483bb-5b30-427c-9037-52eba4d89d86\") " Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.904446 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz85x\" (UniqueName: \"kubernetes.io/projected/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-kube-api-access-nz85x\") pod \"dnsmasq-dns-57d9f4b769-kl9q7\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.904478 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-dns-svc\") pod \"dnsmasq-dns-57d9f4b769-kl9q7\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.904520 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-config\") pod \"dnsmasq-dns-57d9f4b769-kl9q7\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.905030 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbe483bb-5b30-427c-9037-52eba4d89d86" (UID: "dbe483bb-5b30-427c-9037-52eba4d89d86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.905478 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-config" (OuterVolumeSpecName: "config") pod "dbe483bb-5b30-427c-9037-52eba4d89d86" (UID: "dbe483bb-5b30-427c-9037-52eba4d89d86"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.906077 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-config\") pod \"dnsmasq-dns-57d9f4b769-kl9q7\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.906926 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-dns-svc\") pod \"dnsmasq-dns-57d9f4b769-kl9q7\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.907478 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe483bb-5b30-427c-9037-52eba4d89d86-kube-api-access-wj7kf" (OuterVolumeSpecName: "kube-api-access-wj7kf") pod "dbe483bb-5b30-427c-9037-52eba4d89d86" (UID: "dbe483bb-5b30-427c-9037-52eba4d89d86"). InnerVolumeSpecName "kube-api-access-wj7kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.921696 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz85x\" (UniqueName: \"kubernetes.io/projected/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-kube-api-access-nz85x\") pod \"dnsmasq-dns-57d9f4b769-kl9q7\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:42 crc kubenswrapper[4922]: I0126 14:26:42.970484 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.005630 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj7kf\" (UniqueName: \"kubernetes.io/projected/dbe483bb-5b30-427c-9037-52eba4d89d86-kube-api-access-wj7kf\") on node \"crc\" DevicePath \"\"" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.005661 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.005670 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe483bb-5b30-427c-9037-52eba4d89d86-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.211677 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.214246 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.222302 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.222300 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.222376 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.222517 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qfw56" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.222657 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.222701 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.222841 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.231841 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.309836 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.309894 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.309936 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.309979 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.310013 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.310047 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.310101 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.310129 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbqjs\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-kube-api-access-wbqjs\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.310152 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.310182 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.310224 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.411967 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.412038 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.412094 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.412124 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbqjs\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-kube-api-access-wbqjs\") pod \"rabbitmq-server-0\" (UID: 
\"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.412150 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.412556 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.412620 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.412983 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.413268 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.413386 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.413408 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.413464 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.413522 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.413463 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.413841 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.414134 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.414425 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.419540 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.421001 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.421547 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.426104 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbqjs\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-kube-api-access-wbqjs\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.427012 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.435687 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " pod="openstack/rabbitmq-server-0" Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.470389 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] 
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.471798 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.473371 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.476407 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jszrh"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.476763 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.476886 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.480777 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.480810 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.481017 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.486120 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.532131 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615657 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3ea763a-f09f-435f-b75d-69e3b9160943-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615703 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615728 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615747 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615762 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615785 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615807 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbw8h\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-kube-api-access-fbw8h\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615859 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615875 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.615895 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.616158 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3ea763a-f09f-435f-b75d-69e3b9160943-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.721027 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.721992 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3ea763a-f09f-435f-b75d-69e3b9160943-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722054 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722100 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722119 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722135 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722165 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722194 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbw8h\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-kube-api-access-fbw8h\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722235 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722239 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722254 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722276 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722325 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3ea763a-f09f-435f-b75d-69e3b9160943-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.722798 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.723697 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.724709 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.725155 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.728493 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.729324 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.729841 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3ea763a-f09f-435f-b75d-69e3b9160943-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.730626 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3ea763a-f09f-435f-b75d-69e3b9160943-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.731517 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.732323 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.732521 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.732816 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.732863 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.732985 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.733175 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.733530 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.733912 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-ljmlw"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.735215 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.751833 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbw8h\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-kube-api-access-fbw8h\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.763218 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.795690 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.798870 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbb78b96f-cnmc7"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.826211 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.826290 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1881b31a-fd0f-40c8-a098-10888cec43db-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.826345 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1881b31a-fd0f-40c8-a098-10888cec43db-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.826427 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.826458 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkd6l\" (UniqueName: \"kubernetes.io/projected/1881b31a-fd0f-40c8-a098-10888cec43db-kube-api-access-xkd6l\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.826490 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.826536 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1881b31a-fd0f-40c8-a098-10888cec43db-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.826803 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1881b31a-fd0f-40c8-a098-10888cec43db-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.826955 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.827128 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1881b31a-fd0f-40c8-a098-10888cec43db-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.828853 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.849323 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbb78b96f-cnmc7"]
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.862753 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbb78b96f-cnmc7"]
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.930677 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.930734 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1881b31a-fd0f-40c8-a098-10888cec43db-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.930774 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1881b31a-fd0f-40c8-a098-10888cec43db-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.930820 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.930867 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1881b31a-fd0f-40c8-a098-10888cec43db-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.930892 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.930920 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.930945 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1881b31a-fd0f-40c8-a098-10888cec43db-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.930986 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1881b31a-fd0f-40c8-a098-10888cec43db-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.931030 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.931056 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkd6l\" (UniqueName: \"kubernetes.io/projected/1881b31a-fd0f-40c8-a098-10888cec43db-kube-api-access-xkd6l\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.932226 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.932416 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.932587 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1881b31a-fd0f-40c8-a098-10888cec43db-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.932860 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.933304 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1881b31a-fd0f-40c8-a098-10888cec43db-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.933711 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1881b31a-fd0f-40c8-a098-10888cec43db-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.936174 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.938130 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1881b31a-fd0f-40c8-a098-10888cec43db-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.940621 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1881b31a-fd0f-40c8-a098-10888cec43db-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.950990 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkd6l\" (UniqueName: \"kubernetes.io/projected/1881b31a-fd0f-40c8-a098-10888cec43db-kube-api-access-xkd6l\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.952463 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1881b31a-fd0f-40c8-a098-10888cec43db-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:43 crc kubenswrapper[4922]: I0126 14:26:43.966077 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"1881b31a-fd0f-40c8-a098-10888cec43db\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:44 crc kubenswrapper[4922]: I0126 14:26:44.118613 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.105115 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe483bb-5b30-427c-9037-52eba4d89d86" path="/var/lib/kubelet/pods/dbe483bb-5b30-427c-9037-52eba4d89d86/volumes"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.268674 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.270268 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.273232 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.273465 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.275887 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-dxndg"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.276170 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.276403 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.294234 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.355515 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.355572 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-config-data-default\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.355712 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.355903 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-kolla-config\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.355942 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fks9\" (UniqueName: \"kubernetes.io/projected/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-kube-api-access-6fks9\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.356082 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.356171 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.356219 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.457209 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.457260 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.457278 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-config-data-default\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.457327 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.457383 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-kolla-config\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.457400 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fks9\" (UniqueName: \"kubernetes.io/projected/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-kube-api-access-6fks9\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.457420 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.457442 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.458744 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.458917 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.459574 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.460273 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-config-data-default\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.460564 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-kolla-config\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.466328 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.478803 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fks9\" (UniqueName: \"kubernetes.io/projected/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-kube-api-access-6fks9\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.478900 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/205c6bf6-b838-4bea-9cf8-df9fe42bd53f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.486610 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"205c6bf6-b838-4bea-9cf8-df9fe42bd53f\") " pod="openstack/openstack-galera-0"
Jan 26 14:26:45 crc kubenswrapper[4922]: I0126 14:26:45.599205 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.668106 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.669572 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.672794 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.673035 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-s77j6"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.673619 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.673633 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.682241 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.778602 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k54gh\" (UniqueName: \"kubernetes.io/projected/729a7732-744d-4ef7-b2c5-054f0f5f7f79-kube-api-access-k54gh\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.778652 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/729a7732-744d-4ef7-b2c5-054f0f5f7f79-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.778716 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729a7732-744d-4ef7-b2c5-054f0f5f7f79-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.778851 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729a7732-744d-4ef7-b2c5-054f0f5f7f79-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.779032 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/729a7732-744d-4ef7-b2c5-054f0f5f7f79-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.779089 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/729a7732-744d-4ef7-b2c5-054f0f5f7f79-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.779159 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/729a7732-744d-4ef7-b2c5-054f0f5f7f79-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.779193 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.880788 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/729a7732-744d-4ef7-b2c5-054f0f5f7f79-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.880849 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/729a7732-744d-4ef7-b2c5-054f0f5f7f79-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.880905 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/729a7732-744d-4ef7-b2c5-054f0f5f7f79-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.880940 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.880994 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k54gh\" (UniqueName: \"kubernetes.io/projected/729a7732-744d-4ef7-b2c5-054f0f5f7f79-kube-api-access-k54gh\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.881016 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/729a7732-744d-4ef7-b2c5-054f0f5f7f79-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.885329 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729a7732-744d-4ef7-b2c5-054f0f5f7f79-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.885421 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729a7732-744d-4ef7-b2c5-054f0f5f7f79-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.885659 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.886037 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/729a7732-744d-4ef7-b2c5-054f0f5f7f79-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.886152 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/729a7732-744d-4ef7-b2c5-054f0f5f7f79-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.886181 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/729a7732-744d-4ef7-b2c5-054f0f5f7f79-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.927837 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k54gh\" (UniqueName: \"kubernetes.io/projected/729a7732-744d-4ef7-b2c5-054f0f5f7f79-kube-api-access-k54gh\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.938646 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.943900 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.945612 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.950869 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.951266 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.951420 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2vn24"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.960546 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.986671 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.986738 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-config-data\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.986770 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcr5v\" (UniqueName: \"kubernetes.io/projected/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-kube-api-access-kcr5v\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.986801 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:46 crc kubenswrapper[4922]: I0126 14:26:46.986841 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-kolla-config\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.087983 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-kolla-config\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.088155 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.088200 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-config-data\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.088229 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcr5v\" (UniqueName: \"kubernetes.io/projected/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-kube-api-access-kcr5v\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.088273 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.089148 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-kolla-config\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.089470 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-config-data\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.091429 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.094033 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.106078 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcr5v\" (UniqueName: \"kubernetes.io/projected/9cb3b1de-0efe-4de9-9e48-6f2f6885c197-kube-api-access-kcr5v\") pod \"memcached-0\" (UID: \"9cb3b1de-0efe-4de9-9e48-6f2f6885c197\") " pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.297965 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.564378 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729a7732-744d-4ef7-b2c5-054f0f5f7f79-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.569796 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/729a7732-744d-4ef7-b2c5-054f0f5f7f79-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.570459 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729a7732-744d-4ef7-b2c5-054f0f5f7f79-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"729a7732-744d-4ef7-b2c5-054f0f5f7f79\") " pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:47 crc kubenswrapper[4922]: I0126 14:26:47.597780 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 26 14:26:48 crc kubenswrapper[4922]: I0126 14:26:48.712538 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 14:26:48 crc kubenswrapper[4922]: I0126 14:26:48.714420 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 14:26:48 crc kubenswrapper[4922]: I0126 14:26:48.716690 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-97fj2"
Jan 26 14:26:48 crc kubenswrapper[4922]: I0126 14:26:48.736518 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 14:26:48 crc kubenswrapper[4922]: I0126 14:26:48.817421 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmvq4\" (UniqueName: \"kubernetes.io/projected/ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd-kube-api-access-lmvq4\") pod \"kube-state-metrics-0\" (UID: \"ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd\") " pod="openstack/kube-state-metrics-0"
Jan 26 14:26:48 crc kubenswrapper[4922]: I0126 14:26:48.921358 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmvq4\" (UniqueName: \"kubernetes.io/projected/ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd-kube-api-access-lmvq4\") pod \"kube-state-metrics-0\" (UID: \"ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd\") " pod="openstack/kube-state-metrics-0"
Jan 26 14:26:48 crc kubenswrapper[4922]: I0126 14:26:48.940194 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmvq4\" (UniqueName: \"kubernetes.io/projected/ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd-kube-api-access-lmvq4\") pod \"kube-state-metrics-0\" (UID: \"ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd\") " pod="openstack/kube-state-metrics-0"
Jan 26 14:26:49 crc kubenswrapper[4922]: I0126 14:26:49.035663 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.043789 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.046283 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.050447 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.050677 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8jmw5"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.050851 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.050987 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.051268 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.051443 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.051572 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.063877 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.069157 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.141793 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.142096 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.142290 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-config\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.142436 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv6gw\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-kube-api-access-pv6gw\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.142562 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.142656 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.142754 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.142923 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1bf34eda-49e0-412d-82b6-fe587116900f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.143034 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.143204 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.244154 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1bf34eda-49e0-412d-82b6-fe587116900f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.244201 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.244907 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.244952 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.244998 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.245022 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.245464 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-config\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.245506 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv6gw\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-kube-api-access-pv6gw\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.245528 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.245814 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.245835 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName:
\"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.247184 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.247481 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.249453 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.249450 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.250127 4922 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.250151 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b3c4d564f6fc84458e9d6100c084ffc8a5ec4c60a4c8efc38ff6415485a8e6e/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.250363 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.251970 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1bf34eda-49e0-412d-82b6-fe587116900f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.252476 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-config\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.262678 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv6gw\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-kube-api-access-pv6gw\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.270852 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:50 crc kubenswrapper[4922]: I0126 14:26:50.376898 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 26 14:26:51 crc kubenswrapper[4922]: I0126 14:26:51.957790 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x4rqw"]
Jan 26 14:26:51 crc kubenswrapper[4922]: I0126 14:26:51.958716 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:51 crc kubenswrapper[4922]: I0126 14:26:51.961475 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 26 14:26:51 crc kubenswrapper[4922]: I0126 14:26:51.961797 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 26 14:26:51 crc kubenswrapper[4922]: I0126 14:26:51.961875 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zgkmm"
Jan 26 14:26:51 crc kubenswrapper[4922]: I0126 14:26:51.971751 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw"]
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.024556 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-fpgzk"]
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.026582 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.038389 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fpgzk"]
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.079731 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-etc-ovs\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.079778 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-var-log\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.079799 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a2c2044-5422-40dc-92f5-051f1da6b2a2-var-run-ovn\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.079818 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1a2c2044-5422-40dc-92f5-051f1da6b2a2-var-log-ovn\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.079851 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7088cbad-121a-40f6-9934-60a62f980b6d-scripts\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.079904 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-var-run\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.080016 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2c2044-5422-40dc-92f5-051f1da6b2a2-combined-ca-bundle\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.080043 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2c2044-5422-40dc-92f5-051f1da6b2a2-ovn-controller-tls-certs\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.080084 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6k8g\" (UniqueName: \"kubernetes.io/projected/1a2c2044-5422-40dc-92f5-051f1da6b2a2-kube-api-access-h6k8g\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.080120 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-var-lib\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.080140 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a2c2044-5422-40dc-92f5-051f1da6b2a2-var-run\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.080197 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a2c2044-5422-40dc-92f5-051f1da6b2a2-scripts\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.080240 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dw8r\" (UniqueName: \"kubernetes.io/projected/7088cbad-121a-40f6-9934-60a62f980b6d-kube-api-access-9dw8r\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182132 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dw8r\" (UniqueName: \"kubernetes.io/projected/7088cbad-121a-40f6-9934-60a62f980b6d-kube-api-access-9dw8r\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182209 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-etc-ovs\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182247 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-var-log\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182269 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a2c2044-5422-40dc-92f5-051f1da6b2a2-var-run-ovn\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182294 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1a2c2044-5422-40dc-92f5-051f1da6b2a2-var-log-ovn\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182324 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7088cbad-121a-40f6-9934-60a62f980b6d-scripts\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182352 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-var-run\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182408 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2c2044-5422-40dc-92f5-051f1da6b2a2-combined-ca-bundle\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182433 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2c2044-5422-40dc-92f5-051f1da6b2a2-ovn-controller-tls-certs\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182459 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6k8g\" (UniqueName: \"kubernetes.io/projected/1a2c2044-5422-40dc-92f5-051f1da6b2a2-kube-api-access-h6k8g\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182483 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-var-lib\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182504 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a2c2044-5422-40dc-92f5-051f1da6b2a2-var-run\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.182540 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a2c2044-5422-40dc-92f5-051f1da6b2a2-scripts\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.183278 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-etc-ovs\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.183368 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-var-lib\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.183427 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a2c2044-5422-40dc-92f5-051f1da6b2a2-var-run-ovn\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.183456 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a2c2044-5422-40dc-92f5-051f1da6b2a2-var-run\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.183575 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-var-log\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.183606 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1a2c2044-5422-40dc-92f5-051f1da6b2a2-var-log-ovn\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.183640 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7088cbad-121a-40f6-9934-60a62f980b6d-var-run\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.184707 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a2c2044-5422-40dc-92f5-051f1da6b2a2-scripts\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.185156 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7088cbad-121a-40f6-9934-60a62f980b6d-scripts\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.196389 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2c2044-5422-40dc-92f5-051f1da6b2a2-ovn-controller-tls-certs\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.196641 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2c2044-5422-40dc-92f5-051f1da6b2a2-combined-ca-bundle\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.199274 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6k8g\" (UniqueName: \"kubernetes.io/projected/1a2c2044-5422-40dc-92f5-051f1da6b2a2-kube-api-access-h6k8g\") pod \"ovn-controller-x4rqw\" (UID: \"1a2c2044-5422-40dc-92f5-051f1da6b2a2\") " pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.200565 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dw8r\" (UniqueName: \"kubernetes.io/projected/7088cbad-121a-40f6-9934-60a62f980b6d-kube-api-access-9dw8r\") pod \"ovn-controller-ovs-fpgzk\" (UID: \"7088cbad-121a-40f6-9934-60a62f980b6d\") " pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.291576 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.355360 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-fpgzk"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.859121 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.860523 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.863155 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.864751 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.865017 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.865232 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.865611 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-xl5w2"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.868679 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.995931 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5cedc59-0829-41da-94bd-17137258865f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.995982 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.996006 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f5cedc59-0829-41da-94bd-17137258865f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.996056 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5cedc59-0829-41da-94bd-17137258865f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.996098 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5cedc59-0829-41da-94bd-17137258865f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.996114 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvn8h\" (UniqueName: \"kubernetes.io/projected/f5cedc59-0829-41da-94bd-17137258865f-kube-api-access-kvn8h\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.996151 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5cedc59-0829-41da-94bd-17137258865f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:52 crc kubenswrapper[4922]: I0126 14:26:52.996176 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cedc59-0829-41da-94bd-17137258865f-config\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.097719 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5cedc59-0829-41da-94bd-17137258865f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.097782 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5cedc59-0829-41da-94bd-17137258865f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.097809 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvn8h\" (UniqueName: \"kubernetes.io/projected/f5cedc59-0829-41da-94bd-17137258865f-kube-api-access-kvn8h\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.097866 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5cedc59-0829-41da-94bd-17137258865f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.097906 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cedc59-0829-41da-94bd-17137258865f-config\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.097954 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5cedc59-0829-41da-94bd-17137258865f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.097985 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.098011 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f5cedc59-0829-41da-94bd-17137258865f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.098594 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f5cedc59-0829-41da-94bd-17137258865f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.099659 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5cedc59-0829-41da-94bd-17137258865f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.099763 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cedc59-0829-41da-94bd-17137258865f-config\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.100656 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.102933 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5cedc59-0829-41da-94bd-17137258865f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.103228 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5cedc59-0829-41da-94bd-17137258865f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.116038 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvn8h\" (UniqueName: \"kubernetes.io/projected/f5cedc59-0829-41da-94bd-17137258865f-kube-api-access-kvn8h\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.117400 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5cedc59-0829-41da-94bd-17137258865f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.124099 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f5cedc59-0829-41da-94bd-17137258865f\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:53 crc kubenswrapper[4922]: I0126 14:26:53.187077 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.460411 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.462354 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.465546 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.466035 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.466158 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pcvr2"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.466396 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.471005 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.605587 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc48070-5821-46c0-b06a-d50d64d22e19-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.605641 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bc48070-5821-46c0-b06a-d50d64d22e19-config\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.605737 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bc48070-5821-46c0-b06a-d50d64d22e19-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.605784 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc48070-5821-46c0-b06a-d50d64d22e19-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.605846 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wj22\" (UniqueName: \"kubernetes.io/projected/6bc48070-5821-46c0-b06a-d50d64d22e19-kube-api-access-6wj22\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.605884 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6bc48070-5821-46c0-b06a-d50d64d22e19-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.605916 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc48070-5821-46c0-b06a-d50d64d22e19-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.606000 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.707776 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bc48070-5821-46c0-b06a-d50d64d22e19-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.707842 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc48070-5821-46c0-b06a-d50d64d22e19-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.707869 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wj22\" (UniqueName: \"kubernetes.io/projected/6bc48070-5821-46c0-b06a-d50d64d22e19-kube-api-access-6wj22\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.707895 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6bc48070-5821-46c0-b06a-d50d64d22e19-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.707932 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc48070-5821-46c0-b06a-d50d64d22e19-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.707971 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.708002 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc48070-5821-46c0-b06a-d50d64d22e19-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.708021 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bc48070-5821-46c0-b06a-d50d64d22e19-config\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.708822 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.709489 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bc48070-5821-46c0-b06a-d50d64d22e19-config\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.709953 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6bc48070-5821-46c0-b06a-d50d64d22e19-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.710435 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bc48070-5821-46c0-b06a-d50d64d22e19-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.716634 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc48070-5821-46c0-b06a-d50d64d22e19-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.716747 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc48070-5821-46c0-b06a-d50d64d22e19-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.719474 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc48070-5821-46c0-b06a-d50d64d22e19-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.735977 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wj22\" (UniqueName: \"kubernetes.io/projected/6bc48070-5821-46c0-b06a-d50d64d22e19-kube-api-access-6wj22\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.741843 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6bc48070-5821-46c0-b06a-d50d64d22e19\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:56 crc kubenswrapper[4922]: I0126 14:26:56.798516 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 26 14:26:57 crc kubenswrapper[4922]: E0126 14:26:57.647543 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Jan 26 14:26:57 crc kubenswrapper[4922]: E0126 14:26:57.647816 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Jan 26 14:26:57 crc kubenswrapper[4922]: E0126 14:26:57.648049 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.230:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfvr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-654788c785-87hph_openstack(d7482af4-1f1d-495b-bace-425b679e778d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 14:26:57 crc kubenswrapper[4922]: E0126 14:26:57.649399 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-654788c785-87hph" podUID="d7482af4-1f1d-495b-bace-425b679e778d"
Jan 26 14:26:57 crc kubenswrapper[4922]: E0126 14:26:57.666606 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Jan 26 14:26:57 crc kubenswrapper[4922]: E0126 14:26:57.667160 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Jan 26 14:26:57 crc kubenswrapper[4922]: E0126 14:26:57.667463 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.230:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js4hn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7557798fb9-frpcn_openstack(b6060142-97a0-42ea-8c06-94648c7e9839): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 14:26:57 crc kubenswrapper[4922]: E0126 14:26:57.668818 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7557798fb9-frpcn" podUID="b6060142-97a0-42ea-8c06-94648c7e9839"
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.360607 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f57bd8fc-kqsz6"]
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.788204 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-654788c785-87hph"
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.788256 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7557798fb9-frpcn"
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.899440 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d9f4b769-kl9q7"]
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.909315 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.914288 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw"]
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.920793 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.941498 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"205c6bf6-b838-4bea-9cf8-df9fe42bd53f","Type":"ContainerStarted","Data":"88e525068e74d268f67a6d740d6d5e6df15d2e9de3113d9accf822bdec3f8933"}
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.948045 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7557798fb9-frpcn" event={"ID":"b6060142-97a0-42ea-8c06-94648c7e9839","Type":"ContainerDied","Data":"c495c53c24d2d13fa8825fedc197f6be916d1d1ea4826f3f2e5aca5bc8ab6cc9"}
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.948153 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7557798fb9-frpcn"
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.948915 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.948947 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfvr5\" (UniqueName: \"kubernetes.io/projected/d7482af4-1f1d-495b-bace-425b679e778d-kube-api-access-xfvr5\") pod \"d7482af4-1f1d-495b-bace-425b679e778d\" (UID: \"d7482af4-1f1d-495b-bace-425b679e778d\") "
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.948998 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-dns-svc\") pod \"b6060142-97a0-42ea-8c06-94648c7e9839\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") "
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.949084 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js4hn\" (UniqueName: \"kubernetes.io/projected/b6060142-97a0-42ea-8c06-94648c7e9839-kube-api-access-js4hn\") pod \"b6060142-97a0-42ea-8c06-94648c7e9839\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") "
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.949143 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-config\") pod \"b6060142-97a0-42ea-8c06-94648c7e9839\" (UID: \"b6060142-97a0-42ea-8c06-94648c7e9839\") "
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.949275 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7482af4-1f1d-495b-bace-425b679e778d-config\") pod \"d7482af4-1f1d-495b-bace-425b679e778d\" (UID: \"d7482af4-1f1d-495b-bace-425b679e778d\") "
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.949693 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6060142-97a0-42ea-8c06-94648c7e9839" (UID: "b6060142-97a0-42ea-8c06-94648c7e9839"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.949928 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7482af4-1f1d-495b-bace-425b679e778d-config" (OuterVolumeSpecName: "config") pod "d7482af4-1f1d-495b-bace-425b679e778d" (UID: "d7482af4-1f1d-495b-bace-425b679e778d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.950368 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-config" (OuterVolumeSpecName: "config") pod "b6060142-97a0-42ea-8c06-94648c7e9839" (UID: "b6060142-97a0-42ea-8c06-94648c7e9839"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.953194 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7482af4-1f1d-495b-bace-425b679e778d-kube-api-access-xfvr5" (OuterVolumeSpecName: "kube-api-access-xfvr5") pod "d7482af4-1f1d-495b-bace-425b679e778d" (UID: "d7482af4-1f1d-495b-bace-425b679e778d"). InnerVolumeSpecName "kube-api-access-xfvr5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.953978 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6060142-97a0-42ea-8c06-94648c7e9839-kube-api-access-js4hn" (OuterVolumeSpecName: "kube-api-access-js4hn") pod "b6060142-97a0-42ea-8c06-94648c7e9839" (UID: "b6060142-97a0-42ea-8c06-94648c7e9839"). InnerVolumeSpecName "kube-api-access-js4hn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.954830 4922 generic.go:334] "Generic (PLEG): container finished" podID="b82db84b-2c45-49a4-bb32-3b749a9373fc" containerID="017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717" exitCode=0
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.958257 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.958297 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" event={"ID":"b82db84b-2c45-49a4-bb32-3b749a9373fc","Type":"ContainerDied","Data":"017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717"}
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.958318 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" event={"ID":"b82db84b-2c45-49a4-bb32-3b749a9373fc","Type":"ContainerStarted","Data":"ab23e6ead2d7a2f72f41b0705168579b614a9e2c434ab91720b73f4ae1da61d0"}
Jan 26 14:26:58 crc kubenswrapper[4922]: W0126 14:26:58.960695 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a2c2044_5422_40dc_92f5_051f1da6b2a2.slice/crio-090195d7f795d75f459513e522d6c2048997c42305342d2403ef27de50a61ada WatchSource:0}: Error finding container 090195d7f795d75f459513e522d6c2048997c42305342d2403ef27de50a61ada: Status 404 returned error can't find the container with id 090195d7f795d75f459513e522d6c2048997c42305342d2403ef27de50a61ada
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.964253 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-654788c785-87hph" event={"ID":"d7482af4-1f1d-495b-bace-425b679e778d","Type":"ContainerDied","Data":"30e70bfd9982b35fa47c5ba45fb6feb67105679b04d5486e58bb027a538aa7f7"}
Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.964316 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-654788c785-87hph" Jan 26 14:26:58 crc kubenswrapper[4922]: I0126 14:26:58.964583 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 14:26:58 crc kubenswrapper[4922]: W0126 14:26:58.968915 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf34eda_49e0_412d_82b6_fe587116900f.slice/crio-021f480b92f23db65089625144fbee4f51e2fd76a31042a82788d0bc2fdec6e5 WatchSource:0}: Error finding container 021f480b92f23db65089625144fbee4f51e2fd76a31042a82788d0bc2fdec6e5: Status 404 returned error can't find the container with id 021f480b92f23db65089625144fbee4f51e2fd76a31042a82788d0bc2fdec6e5 Jan 26 14:26:58 crc kubenswrapper[4922]: W0126 14:26:58.969258 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cb3b1de_0efe_4de9_9e48_6f2f6885c197.slice/crio-45f800bb884ed61ab4c0486c9e1026ca721e010ab474df313c52eb0805cc419b WatchSource:0}: Error finding container 45f800bb884ed61ab4c0486c9e1026ca721e010ab474df313c52eb0805cc419b: Status 404 returned error can't find the container with id 45f800bb884ed61ab4c0486c9e1026ca721e010ab474df313c52eb0805cc419b Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.061788 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfvr5\" (UniqueName: \"kubernetes.io/projected/d7482af4-1f1d-495b-bace-425b679e778d-kube-api-access-xfvr5\") on node \"crc\" DevicePath \"\"" Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.061825 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.061842 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js4hn\" (UniqueName: \"kubernetes.io/projected/b6060142-97a0-42ea-8c06-94648c7e9839-kube-api-access-js4hn\") on node \"crc\" DevicePath \"\"" Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.061853 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6060142-97a0-42ea-8c06-94648c7e9839-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.061864 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7482af4-1f1d-495b-bace-425b679e778d-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.063619 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.068646 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.077899 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.085101 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 14:26:59 crc kubenswrapper[4922]: W0126 14:26:59.091255 4922 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b0a9cff_a23b_4c41_ac95_97e2b3532cc0.slice/crio-f6ac1786adcb3ed7e825324dc80cf67ab7cfc03c5f8f5ebacdf136d0bff8707e WatchSource:0}: Error finding container f6ac1786adcb3ed7e825324dc80cf67ab7cfc03c5f8f5ebacdf136d0bff8707e: Status 404 returned error can't find the container with id f6ac1786adcb3ed7e825324dc80cf67ab7cfc03c5f8f5ebacdf136d0bff8707e Jan 26 14:26:59 crc kubenswrapper[4922]: W0126 14:26:59.127651 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3ea763a_f09f_435f_b75d_69e3b9160943.slice/crio-9e48dcc7a919f738b7cc5f7e2e6d5dc6af1f476b92624cf7d3aa457499f98215 WatchSource:0}: Error finding container 9e48dcc7a919f738b7cc5f7e2e6d5dc6af1f476b92624cf7d3aa457499f98215: Status 404 returned error can't find the container with id 9e48dcc7a919f738b7cc5f7e2e6d5dc6af1f476b92624cf7d3aa457499f98215 Jan 26 14:26:59 crc kubenswrapper[4922]: W0126 14:26:59.134191 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef85b6d3_6e1d_4a96_9a93_19ab9618c3cd.slice/crio-1c8df4bd8a3b6178e44d0529145aeb7be9c6660f766ed60ee2392f89636271f5 WatchSource:0}: Error finding container 1c8df4bd8a3b6178e44d0529145aeb7be9c6660f766ed60ee2392f89636271f5: Status 404 returned error can't find the container with id 1c8df4bd8a3b6178e44d0529145aeb7be9c6660f766ed60ee2392f89636271f5 Jan 26 14:26:59 crc kubenswrapper[4922]: W0126 14:26:59.191852 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5cedc59_0829_41da_94bd_17137258865f.slice/crio-7102aeaa9ec1a428d0532f6aeb0600bb96fc57e15284cc2681e520023e7e0b4b WatchSource:0}: Error finding container 7102aeaa9ec1a428d0532f6aeb0600bb96fc57e15284cc2681e520023e7e0b4b: Status 404 returned error can't find the container with id 7102aeaa9ec1a428d0532f6aeb0600bb96fc57e15284cc2681e520023e7e0b4b Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.201262 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-654788c785-87hph"] Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.207870 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-654788c785-87hph"] Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.212431 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.290115 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7557798fb9-frpcn"] Jan 26 14:26:59 crc kubenswrapper[4922]: E0126 14:26:59.291318 4922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 26 14:26:59 crc kubenswrapper[4922]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/b82db84b-2c45-49a4-bb32-3b749a9373fc/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 14:26:59 crc kubenswrapper[4922]: > podSandboxID="ab23e6ead2d7a2f72f41b0705168579b614a9e2c434ab91720b73f4ae1da61d0" Jan 26 14:26:59 crc kubenswrapper[4922]: E0126 14:26:59.291492 4922 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 14:26:59 crc kubenswrapper[4922]: container &Container{Name:dnsmasq-dns,Image:38.102.83.230:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv87g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-9f57bd8fc-kqsz6_openstack(b82db84b-2c45-49a4-bb32-3b749a9373fc): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b82db84b-2c45-49a4-bb32-3b749a9373fc/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 14:26:59 crc kubenswrapper[4922]: > logger="UnhandledError" Jan 26 14:26:59 crc kubenswrapper[4922]: E0126 14:26:59.292651 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b82db84b-2c45-49a4-bb32-3b749a9373fc/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" podUID="b82db84b-2c45-49a4-bb32-3b749a9373fc" Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.296003 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7557798fb9-frpcn"] Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.301608 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-ovs-fpgzk"] Jan 26 14:26:59 crc kubenswrapper[4922]: W0126 14:26:59.315334 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7088cbad_121a_40f6_9934_60a62f980b6d.slice/crio-051b5fb9d0269fc8961132fe153b1217fbe6c48ab57c39ecf47a16c2e05e8f9b WatchSource:0}: Error finding container 051b5fb9d0269fc8961132fe153b1217fbe6c48ab57c39ecf47a16c2e05e8f9b: Status 404 returned error can't find the container with id 051b5fb9d0269fc8961132fe153b1217fbe6c48ab57c39ecf47a16c2e05e8f9b Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.974359 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw" event={"ID":"1a2c2044-5422-40dc-92f5-051f1da6b2a2","Type":"ContainerStarted","Data":"090195d7f795d75f459513e522d6c2048997c42305342d2403ef27de50a61ada"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.975616 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e3ea763a-f09f-435f-b75d-69e3b9160943","Type":"ContainerStarted","Data":"9e48dcc7a919f738b7cc5f7e2e6d5dc6af1f476b92624cf7d3aa457499f98215"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.976725 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6bc48070-5821-46c0-b06a-d50d64d22e19","Type":"ContainerStarted","Data":"8d80c8047260de3341bde996c48ee99d2b6cda9494d5c6a8e1c1423a7f8b66cf"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.980525 4922 generic.go:334] "Generic (PLEG): container finished" podID="07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" containerID="d3f5ce1dd97d5276eae2cf0ae3d85d513f085f3e7db4ae183d0cc14b4005f013" exitCode=0 Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.980587 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" event={"ID":"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce","Type":"ContainerDied","Data":"d3f5ce1dd97d5276eae2cf0ae3d85d513f085f3e7db4ae183d0cc14b4005f013"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.980611 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" event={"ID":"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce","Type":"ContainerStarted","Data":"1d72db20e70aa6f1db58a9a03f3a6a1d15134723af3ae83ef2c6dbdc0d572a72"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.981955 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd","Type":"ContainerStarted","Data":"1c8df4bd8a3b6178e44d0529145aeb7be9c6660f766ed60ee2392f89636271f5"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.984520 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fpgzk" event={"ID":"7088cbad-121a-40f6-9934-60a62f980b6d","Type":"ContainerStarted","Data":"051b5fb9d0269fc8961132fe153b1217fbe6c48ab57c39ecf47a16c2e05e8f9b"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.986203 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"1881b31a-fd0f-40c8-a098-10888cec43db","Type":"ContainerStarted","Data":"3abacea0c0ed05e7fbdd7fec504564cc863c871e1459bdaffd2c6bc5fa912f8a"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.988865 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerStarted","Data":"021f480b92f23db65089625144fbee4f51e2fd76a31042a82788d0bc2fdec6e5"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.992263 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9cb3b1de-0efe-4de9-9e48-6f2f6885c197","Type":"ContainerStarted","Data":"45f800bb884ed61ab4c0486c9e1026ca721e010ab474df313c52eb0805cc419b"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.993744 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f5cedc59-0829-41da-94bd-17137258865f","Type":"ContainerStarted","Data":"7102aeaa9ec1a428d0532f6aeb0600bb96fc57e15284cc2681e520023e7e0b4b"} Jan 26 14:26:59 crc kubenswrapper[4922]: I0126 14:26:59.995438 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"729a7732-744d-4ef7-b2c5-054f0f5f7f79","Type":"ContainerStarted","Data":"5119e7f3860b23261e3b5f960ec006a06f6f8bbcb33d3c9616731439beb7a4f5"} Jan 26 14:27:00 crc kubenswrapper[4922]: I0126 14:27:00.000150 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0","Type":"ContainerStarted","Data":"f6ac1786adcb3ed7e825324dc80cf67ab7cfc03c5f8f5ebacdf136d0bff8707e"} Jan 26 14:27:01 crc kubenswrapper[4922]: I0126 14:27:01.103611 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6060142-97a0-42ea-8c06-94648c7e9839" path="/var/lib/kubelet/pods/b6060142-97a0-42ea-8c06-94648c7e9839/volumes" Jan 26 14:27:01 crc kubenswrapper[4922]: I0126 14:27:01.104862 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7482af4-1f1d-495b-bace-425b679e778d" path="/var/lib/kubelet/pods/d7482af4-1f1d-495b-bace-425b679e778d/volumes" Jan 26 14:27:07 crc kubenswrapper[4922]: I0126 14:27:07.065947 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" event={"ID":"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce","Type":"ContainerStarted","Data":"e6dc51783a8b9a3a9bfcfc9909e1c034f64e1ac9a65d1efff6ff6d991246b371"} Jan 26 14:27:07 crc kubenswrapper[4922]: I0126 14:27:07.067387 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:27:07 crc kubenswrapper[4922]: I0126 14:27:07.081868 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" event={"ID":"b82db84b-2c45-49a4-bb32-3b749a9373fc","Type":"ContainerStarted","Data":"b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b"} Jan 26 14:27:07 crc kubenswrapper[4922]: I0126 14:27:07.082124 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:27:07 crc kubenswrapper[4922]: I0126 14:27:07.091578 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" podStartSLOduration=25.091558179 podStartE2EDuration="25.091558179s" podCreationTimestamp="2026-01-26 14:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:27:07.086699631 +0000 UTC m=+1044.288962433" watchObservedRunningTime="2026-01-26 14:27:07.091558179 +0000 UTC m=+1044.293820951" Jan 26 14:27:07 crc kubenswrapper[4922]: I0126 14:27:07.112107 4922 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" podStartSLOduration=25.060545361 podStartE2EDuration="25.11208668s" podCreationTimestamp="2026-01-26 14:26:42 +0000 UTC" firstStartedPulling="2026-01-26 14:26:58.37651707 +0000 UTC m=+1035.578779842" lastFinishedPulling="2026-01-26 14:26:58.428058389 +0000 UTC m=+1035.630321161" observedRunningTime="2026-01-26 14:27:07.107368515 +0000 UTC m=+1044.309631287" watchObservedRunningTime="2026-01-26 14:27:07.11208668 +0000 UTC m=+1044.314349452" Jan 26 14:27:08 crc kubenswrapper[4922]: I0126 14:27:08.097902 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"205c6bf6-b838-4bea-9cf8-df9fe42bd53f","Type":"ContainerStarted","Data":"4a6d26ea56684019c6777012f42f8379c242b6c97a1921f8d568806211d3f0a8"} Jan 26 14:27:08 crc kubenswrapper[4922]: I0126 14:27:08.100709 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9cb3b1de-0efe-4de9-9e48-6f2f6885c197","Type":"ContainerStarted","Data":"d052037a96e682cdf34b64ff1c5d623ab9bcf22c41e88bf04083e665faec917f"} Jan 26 14:27:08 crc kubenswrapper[4922]: I0126 14:27:08.101335 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 14:27:08 crc kubenswrapper[4922]: I0126 14:27:08.147975 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.213532712 podStartE2EDuration="22.147959431s" podCreationTimestamp="2026-01-26 14:26:46 +0000 UTC" firstStartedPulling="2026-01-26 14:26:58.99987224 +0000 UTC m=+1036.202135012" lastFinishedPulling="2026-01-26 14:27:05.934298949 +0000 UTC m=+1043.136561731" observedRunningTime="2026-01-26 14:27:08.141596453 +0000 UTC m=+1045.343859225" watchObservedRunningTime="2026-01-26 14:27:08.147959431 +0000 UTC m=+1045.350222203" Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.111468 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"729a7732-744d-4ef7-b2c5-054f0f5f7f79","Type":"ContainerStarted","Data":"654924e03a4b9f9627104adb2bb4685269628823fa15e381d3fc817cd4972458"} Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.118859 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw" event={"ID":"1a2c2044-5422-40dc-92f5-051f1da6b2a2","Type":"ContainerStarted","Data":"90b7c24d59b64b2e2a2341464bbce82fdf09c5a8fe70412324f9c0e50e2d325b"} Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.118991 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-x4rqw" Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.120575 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e3ea763a-f09f-435f-b75d-69e3b9160943","Type":"ContainerStarted","Data":"e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f"} Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.122277 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6bc48070-5821-46c0-b06a-d50d64d22e19","Type":"ContainerStarted","Data":"e238fd18d1efd3e552a8b2204627263ca785d69c4bb9e7314703dd87646262b9"} Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.123846 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd","Type":"ContainerStarted","Data":"769b53b07a50207a25307b7f5203882f67548fea70fc64521a6748241808602a"} Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.124298 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.125709 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fpgzk" event={"ID":"7088cbad-121a-40f6-9934-60a62f980b6d","Type":"ContainerStarted","Data":"1b799cf2ef25bb21dd32dd2508f7f9c4a7b5b0329059cc8f4a6a64aaa214d39a"} Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.131360 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f5cedc59-0829-41da-94bd-17137258865f","Type":"ContainerStarted","Data":"d2b2a1e55d67b85838eeacc3fd11a73dbc4a54828f5b9d269f066645505fc04d"} Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.147345 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x4rqw" podStartSLOduration=10.908384557 podStartE2EDuration="18.147328827s" podCreationTimestamp="2026-01-26 14:26:51 +0000 UTC" firstStartedPulling="2026-01-26 14:26:58.964686062 +0000 UTC m=+1036.166948834" lastFinishedPulling="2026-01-26 14:27:06.203630332 +0000 UTC m=+1043.405893104" observedRunningTime="2026-01-26 14:27:09.145531459 +0000 UTC m=+1046.347794231" watchObservedRunningTime="2026-01-26 14:27:09.147328827 +0000 UTC m=+1046.349591599" Jan 26 14:27:09 crc kubenswrapper[4922]: I0126 14:27:09.201091 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.911273662 podStartE2EDuration="21.201059024s" podCreationTimestamp="2026-01-26 14:26:48 +0000 UTC" firstStartedPulling="2026-01-26 14:26:59.136966095 +0000 UTC m=+1036.339228867" lastFinishedPulling="2026-01-26 14:27:07.426751417 +0000 UTC m=+1044.629014229" observedRunningTime="2026-01-26 14:27:09.19977539 +0000 UTC m=+1046.402038162" watchObservedRunningTime="2026-01-26 14:27:09.201059024 +0000 UTC m=+1046.403321796" Jan 26 14:27:10 crc kubenswrapper[4922]: I0126 14:27:10.141403 4922 generic.go:334] "Generic (PLEG): container finished" podID="7088cbad-121a-40f6-9934-60a62f980b6d" containerID="1b799cf2ef25bb21dd32dd2508f7f9c4a7b5b0329059cc8f4a6a64aaa214d39a" exitCode=0 Jan 26 14:27:10 crc kubenswrapper[4922]: I0126 14:27:10.141475 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fpgzk" event={"ID":"7088cbad-121a-40f6-9934-60a62f980b6d","Type":"ContainerDied","Data":"1b799cf2ef25bb21dd32dd2508f7f9c4a7b5b0329059cc8f4a6a64aaa214d39a"} Jan 26 14:27:10 crc kubenswrapper[4922]: I0126 14:27:10.143853 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"1881b31a-fd0f-40c8-a098-10888cec43db","Type":"ContainerStarted","Data":"d264fb210e4eb6f10e19d40617b25321efcdd1122070fdff3b7a7d19f57ebfef"} Jan 26 14:27:10 crc kubenswrapper[4922]: I0126 14:27:10.149479 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0","Type":"ContainerStarted","Data":"2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754"} Jan 26 14:27:11 crc kubenswrapper[4922]: I0126 14:27:11.158511 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerStarted","Data":"db29138d2553985c8a0ac700ae413106b67e265d80edda513a17bed8b28ad52e"} Jan 26 14:27:12 crc kubenswrapper[4922]: I0126 14:27:12.299496 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 14:27:12 crc kubenswrapper[4922]: I0126 14:27:12.405213 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:27:12 crc kubenswrapper[4922]: I0126 14:27:12.979408 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.044282 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f57bd8fc-kqsz6"] Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.187996 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fpgzk" event={"ID":"7088cbad-121a-40f6-9934-60a62f980b6d","Type":"ContainerStarted","Data":"328b801bbfe35d48f2a97b1e96374b3f4af73554cde2e563143f179edb0996df"} Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.189964 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f5cedc59-0829-41da-94bd-17137258865f","Type":"ContainerStarted","Data":"6037ef3675310148d183253cc2fcb0b512b4c2043235e82b04599f86f229ff24"} Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.192406 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" podUID="b82db84b-2c45-49a4-bb32-3b749a9373fc" containerName="dnsmasq-dns" containerID="cri-o://b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b" gracePeriod=10 Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.192799 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6bc48070-5821-46c0-b06a-d50d64d22e19","Type":"ContainerStarted","Data":"3e0366d22bd8f101a5ff7e21a15dca02b8f4b0441229cb8b1b54175782257d96"} Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.248905 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.427203732 podStartE2EDuration="18.248888116s" podCreationTimestamp="2026-01-26 14:26:55 +0000 UTC" firstStartedPulling="2026-01-26 14:26:59.141153236 +0000 UTC m=+1036.343416008" lastFinishedPulling="2026-01-26 14:27:11.96283762 +0000 UTC m=+1049.165100392" observedRunningTime="2026-01-26 14:27:13.236402006 +0000 UTC m=+1050.438664768" watchObservedRunningTime="2026-01-26 14:27:13.248888116 +0000 UTC m=+1050.451150888" Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.252291 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.469381575 podStartE2EDuration="22.252275415s" podCreationTimestamp="2026-01-26 14:26:51 +0000 UTC" firstStartedPulling="2026-01-26 14:26:59.196107095 +0000 UTC m=+1036.398369867" lastFinishedPulling="2026-01-26 14:27:11.979000935 +0000 UTC m=+1049.181263707" observedRunningTime="2026-01-26 14:27:13.215143576 +0000 UTC m=+1050.417406378" watchObservedRunningTime="2026-01-26 14:27:13.252275415 +0000 UTC m=+1050.454538187" Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.694662 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.798344 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-config\") pod \"b82db84b-2c45-49a4-bb32-3b749a9373fc\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.798565 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv87g\" (UniqueName: \"kubernetes.io/projected/b82db84b-2c45-49a4-bb32-3b749a9373fc-kube-api-access-sv87g\") pod \"b82db84b-2c45-49a4-bb32-3b749a9373fc\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.798626 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-dns-svc\") pod \"b82db84b-2c45-49a4-bb32-3b749a9373fc\" (UID: \"b82db84b-2c45-49a4-bb32-3b749a9373fc\") " Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.822018 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82db84b-2c45-49a4-bb32-3b749a9373fc-kube-api-access-sv87g" (OuterVolumeSpecName: "kube-api-access-sv87g") pod "b82db84b-2c45-49a4-bb32-3b749a9373fc" (UID: "b82db84b-2c45-49a4-bb32-3b749a9373fc"). InnerVolumeSpecName "kube-api-access-sv87g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.847990 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b82db84b-2c45-49a4-bb32-3b749a9373fc" (UID: "b82db84b-2c45-49a4-bb32-3b749a9373fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.849086 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-config" (OuterVolumeSpecName: "config") pod "b82db84b-2c45-49a4-bb32-3b749a9373fc" (UID: "b82db84b-2c45-49a4-bb32-3b749a9373fc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.900632 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv87g\" (UniqueName: \"kubernetes.io/projected/b82db84b-2c45-49a4-bb32-3b749a9373fc-kube-api-access-sv87g\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.900665 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:13 crc kubenswrapper[4922]: I0126 14:27:13.900694 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82db84b-2c45-49a4-bb32-3b749a9373fc-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.188926 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.203386 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fpgzk" event={"ID":"7088cbad-121a-40f6-9934-60a62f980b6d","Type":"ContainerStarted","Data":"46e236bf988ad2905c403f846c394780316990a0df37057b672f3585d9428245"} Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.203565 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fpgzk" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.205747 4922 generic.go:334] "Generic (PLEG): container finished" podID="b82db84b-2c45-49a4-bb32-3b749a9373fc" containerID="b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b" exitCode=0 Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.205862 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.207120 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" event={"ID":"b82db84b-2c45-49a4-bb32-3b749a9373fc","Type":"ContainerDied","Data":"b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b"} Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.207164 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f57bd8fc-kqsz6" event={"ID":"b82db84b-2c45-49a4-bb32-3b749a9373fc","Type":"ContainerDied","Data":"ab23e6ead2d7a2f72f41b0705168579b614a9e2c434ab91720b73f4ae1da61d0"} Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.207186 4922 scope.go:117] "RemoveContainer" containerID="b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.231567 4922 scope.go:117] "RemoveContainer" containerID="017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.235614 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.238758 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-fpgzk" podStartSLOduration=15.353754715 podStartE2EDuration="22.23872083s" podCreationTimestamp="2026-01-26 14:26:52 +0000 UTC" firstStartedPulling="2026-01-26 14:26:59.318661987 +0000 UTC m=+1036.520924749" lastFinishedPulling="2026-01-26 14:27:06.203628092 +0000 UTC m=+1043.405890864" observedRunningTime="2026-01-26 14:27:14.225674336 +0000 UTC m=+1051.427937108" watchObservedRunningTime="2026-01-26 14:27:14.23872083 +0000 UTC m=+1051.440983602" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.253574 4922 scope.go:117] "RemoveContainer" containerID="b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b" Jan 26 14:27:14 crc kubenswrapper[4922]: E0126 14:27:14.254047 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b\": container with ID starting with b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b not found: ID does not exist" containerID="b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.254107 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b"} err="failed to get container status \"b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b\": rpc error: code = NotFound desc = could not find container \"b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b\": container with ID starting with b5d85260bb460f205784b4b0522b0dd7aabd8bf4e0a844f7cc26a8a506cd931b not found: ID does not exist" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.254137 4922 scope.go:117] "RemoveContainer" containerID="017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717" Jan 26 14:27:14 crc kubenswrapper[4922]: E0126 14:27:14.254531 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717\": container with ID starting with 
017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717 not found: ID does not exist" containerID="017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.254565 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717"} err="failed to get container status \"017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717\": rpc error: code = NotFound desc = could not find container \"017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717\": container with ID starting with 017d56c29d9febd9624cb9c6920260be8874e54f8a0c58dc4e72d5164656f717 not found: ID does not exist" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.279108 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f57bd8fc-kqsz6"] Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.284279 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9f57bd8fc-kqsz6"] Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.798699 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 14:27:14 crc kubenswrapper[4922]: I0126 14:27:14.847877 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.102482 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82db84b-2c45-49a4-bb32-3b749a9373fc" path="/var/lib/kubelet/pods/b82db84b-2c45-49a4-bb32-3b749a9373fc/volumes" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.216449 4922 generic.go:334] "Generic (PLEG): container finished" podID="205c6bf6-b838-4bea-9cf8-df9fe42bd53f" containerID="4a6d26ea56684019c6777012f42f8379c242b6c97a1921f8d568806211d3f0a8" exitCode=0 Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.216535 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"205c6bf6-b838-4bea-9cf8-df9fe42bd53f","Type":"ContainerDied","Data":"4a6d26ea56684019c6777012f42f8379c242b6c97a1921f8d568806211d3f0a8"} Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.220028 4922 generic.go:334] "Generic (PLEG): container finished" podID="729a7732-744d-4ef7-b2c5-054f0f5f7f79" containerID="654924e03a4b9f9627104adb2bb4685269628823fa15e381d3fc817cd4972458" exitCode=0 Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.221599 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"729a7732-744d-4ef7-b2c5-054f0f5f7f79","Type":"ContainerDied","Data":"654924e03a4b9f9627104adb2bb4685269628823fa15e381d3fc817cd4972458"} Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.223231 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.223392 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fpgzk" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.224873 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.285953 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.293334 4922 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.644563 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6db78fcf6f-bxfkc"] Jan 26 14:27:15 crc kubenswrapper[4922]: E0126 14:27:15.645263 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82db84b-2c45-49a4-bb32-3b749a9373fc" containerName="init" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.645285 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82db84b-2c45-49a4-bb32-3b749a9373fc" containerName="init" Jan 26 14:27:15 crc kubenswrapper[4922]: E0126 14:27:15.645336 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82db84b-2c45-49a4-bb32-3b749a9373fc" containerName="dnsmasq-dns" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.645345 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82db84b-2c45-49a4-bb32-3b749a9373fc" containerName="dnsmasq-dns" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.645541 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82db84b-2c45-49a4-bb32-3b749a9373fc" containerName="dnsmasq-dns" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.646560 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.648847 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.661181 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6db78fcf6f-bxfkc"] Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.736498 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-ovsdbserver-sb\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.736583 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-dns-svc\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.736605 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-config\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.736689 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd4ms\" (UniqueName: \"kubernetes.io/projected/30704a6a-68d7-41ad-9ca5-aa8f3468699b-kube-api-access-dd4ms\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.838382 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd4ms\" (UniqueName: 
\"kubernetes.io/projected/30704a6a-68d7-41ad-9ca5-aa8f3468699b-kube-api-access-dd4ms\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.838500 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-ovsdbserver-sb\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.838574 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-dns-svc\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.838601 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-config\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.839664 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-config\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.840654 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-ovsdbserver-sb\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.841315 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-dns-svc\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.866148 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6db78fcf6f-bxfkc"] Jan 26 14:27:15 crc kubenswrapper[4922]: E0126 14:27:15.866786 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-dd4ms], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" podUID="30704a6a-68d7-41ad-9ca5-aa8f3468699b" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.868824 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd4ms\" (UniqueName: \"kubernetes.io/projected/30704a6a-68d7-41ad-9ca5-aa8f3468699b-kube-api-access-dd4ms\") pod \"dnsmasq-dns-6db78fcf6f-bxfkc\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.902423 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2nwnw"] Jan 26 14:27:15 crc 
kubenswrapper[4922]: I0126 14:27:15.904370 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.911751 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2nwnw"] Jan 26 14:27:15 crc kubenswrapper[4922]: I0126 14:27:15.965850 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.025275 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bdc6cbc7-b67pp"] Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.029253 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.031638 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.046037 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bdc6cbc7-b67pp"] Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.059919 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6d76ee55-7df1-42fb-817b-031f44d36f82-ovn-rundir\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.060276 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgj74\" (UniqueName: \"kubernetes.io/projected/6d76ee55-7df1-42fb-817b-031f44d36f82-kube-api-access-sgj74\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.060370 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d76ee55-7df1-42fb-817b-031f44d36f82-config\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.060474 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d76ee55-7df1-42fb-817b-031f44d36f82-combined-ca-bundle\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.060543 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6d76ee55-7df1-42fb-817b-031f44d36f82-ovs-rundir\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.060611 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d76ee55-7df1-42fb-817b-031f44d36f82-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2nwnw\" (UID: 
\"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.081133 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.082867 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.084097 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.131204 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.131327 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.131335 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2mz4j" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.131370 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.163841 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d76ee55-7df1-42fb-817b-031f44d36f82-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.164177 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6d76ee55-7df1-42fb-817b-031f44d36f82-ovn-rundir\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.164310 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-nb\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.164444 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgj74\" (UniqueName: \"kubernetes.io/projected/6d76ee55-7df1-42fb-817b-031f44d36f82-kube-api-access-sgj74\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.164548 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-sb\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.164686 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d76ee55-7df1-42fb-817b-031f44d36f82-config\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") 
" pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.165856 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-dns-svc\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.166021 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-config\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.166159 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdbd5\" (UniqueName: \"kubernetes.io/projected/c79c0339-860e-4665-87fe-bc34b6f31229-kube-api-access-gdbd5\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.166279 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d76ee55-7df1-42fb-817b-031f44d36f82-combined-ca-bundle\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.166376 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6d76ee55-7df1-42fb-817b-031f44d36f82-ovs-rundir\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.166544 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6d76ee55-7df1-42fb-817b-031f44d36f82-ovs-rundir\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.165007 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6d76ee55-7df1-42fb-817b-031f44d36f82-ovn-rundir\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.165532 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d76ee55-7df1-42fb-817b-031f44d36f82-config\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.168332 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d76ee55-7df1-42fb-817b-031f44d36f82-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 
crc kubenswrapper[4922]: I0126 14:27:16.173577 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d76ee55-7df1-42fb-817b-031f44d36f82-combined-ca-bundle\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.186621 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgj74\" (UniqueName: \"kubernetes.io/projected/6d76ee55-7df1-42fb-817b-031f44d36f82-kube-api-access-sgj74\") pod \"ovn-controller-metrics-2nwnw\" (UID: \"6d76ee55-7df1-42fb-817b-031f44d36f82\") " pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.228659 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"205c6bf6-b838-4bea-9cf8-df9fe42bd53f","Type":"ContainerStarted","Data":"38646d5de88b7ff72fbac4fb3c6cf106a108a5549aaeefd7c8aa5699637bae00"} Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.234253 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"729a7732-744d-4ef7-b2c5-054f0f5f7f79","Type":"ContainerStarted","Data":"0f8615ad77c66f30087561751fae269880a6bd26c22f924f43bbc3ec8ee24873"} Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.234542 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.247442 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=24.952162689 podStartE2EDuration="32.247424145s" podCreationTimestamp="2026-01-26 14:26:44 +0000 UTC" firstStartedPulling="2026-01-26 14:26:58.91040452 +0000 UTC m=+1036.112667292" lastFinishedPulling="2026-01-26 14:27:06.205665976 +0000 UTC m=+1043.407928748" observedRunningTime="2026-01-26 14:27:16.244768775 +0000 UTC m=+1053.447031547" watchObservedRunningTime="2026-01-26 14:27:16.247424145 +0000 UTC m=+1053.449686917" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.267889 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268316 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-config\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268410 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdbd5\" (UniqueName: \"kubernetes.io/projected/c79c0339-860e-4665-87fe-bc34b6f31229-kube-api-access-gdbd5\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268490 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44db7ec1-3a40-46de-b048-94191897a988-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268569 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgb67\" (UniqueName: \"kubernetes.io/projected/44db7ec1-3a40-46de-b048-94191897a988-kube-api-access-lgb67\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268625 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/44db7ec1-3a40-46de-b048-94191897a988-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268693 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44db7ec1-3a40-46de-b048-94191897a988-scripts\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268723 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44db7ec1-3a40-46de-b048-94191897a988-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268747 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44db7ec1-3a40-46de-b048-94191897a988-config\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268800 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-nb\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268854 
4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44db7ec1-3a40-46de-b048-94191897a988-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268891 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-sb\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.268948 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-dns-svc\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.269473 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-config\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.269699 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-sb\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.269854 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-dns-svc\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.270012 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-nb\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.279787 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.02338407 podStartE2EDuration="31.279765579s" podCreationTimestamp="2026-01-26 14:26:45 +0000 UTC" firstStartedPulling="2026-01-26 14:26:58.947649213 +0000 UTC m=+1036.149911985" lastFinishedPulling="2026-01-26 14:27:06.204030722 +0000 UTC m=+1043.406293494" observedRunningTime="2026-01-26 14:27:16.273564894 +0000 UTC m=+1053.475827666" watchObservedRunningTime="2026-01-26 14:27:16.279765579 +0000 UTC m=+1053.482028351" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.287248 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdbd5\" (UniqueName: \"kubernetes.io/projected/c79c0339-860e-4665-87fe-bc34b6f31229-kube-api-access-gdbd5\") pod \"dnsmasq-dns-84bdc6cbc7-b67pp\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " 
pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.292456 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2nwnw" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.358414 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.374352 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-dns-svc\") pod \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.374404 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-config\") pod \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.374428 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-ovsdbserver-sb\") pod \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.374475 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd4ms\" (UniqueName: \"kubernetes.io/projected/30704a6a-68d7-41ad-9ca5-aa8f3468699b-kube-api-access-dd4ms\") pod \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\" (UID: \"30704a6a-68d7-41ad-9ca5-aa8f3468699b\") " Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.374715 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44db7ec1-3a40-46de-b048-94191897a988-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.374799 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgb67\" (UniqueName: \"kubernetes.io/projected/44db7ec1-3a40-46de-b048-94191897a988-kube-api-access-lgb67\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.375029 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/44db7ec1-3a40-46de-b048-94191897a988-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.375046 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44db7ec1-3a40-46de-b048-94191897a988-scripts\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.375077 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44db7ec1-3a40-46de-b048-94191897a988-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.375110 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44db7ec1-3a40-46de-b048-94191897a988-config\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.375177 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44db7ec1-3a40-46de-b048-94191897a988-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.376814 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44db7ec1-3a40-46de-b048-94191897a988-scripts\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.377557 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-config" (OuterVolumeSpecName: "config") pod "30704a6a-68d7-41ad-9ca5-aa8f3468699b" (UID: "30704a6a-68d7-41ad-9ca5-aa8f3468699b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.377880 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30704a6a-68d7-41ad-9ca5-aa8f3468699b" (UID: "30704a6a-68d7-41ad-9ca5-aa8f3468699b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.378323 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30704a6a-68d7-41ad-9ca5-aa8f3468699b" (UID: "30704a6a-68d7-41ad-9ca5-aa8f3468699b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.379578 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44db7ec1-3a40-46de-b048-94191897a988-config\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.380686 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44db7ec1-3a40-46de-b048-94191897a988-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.389821 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/44db7ec1-3a40-46de-b048-94191897a988-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.396837 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44db7ec1-3a40-46de-b048-94191897a988-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.397867 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30704a6a-68d7-41ad-9ca5-aa8f3468699b-kube-api-access-dd4ms" (OuterVolumeSpecName: "kube-api-access-dd4ms") pod "30704a6a-68d7-41ad-9ca5-aa8f3468699b" (UID: "30704a6a-68d7-41ad-9ca5-aa8f3468699b"). InnerVolumeSpecName "kube-api-access-dd4ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.410878 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44db7ec1-3a40-46de-b048-94191897a988-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.414269 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgb67\" (UniqueName: \"kubernetes.io/projected/44db7ec1-3a40-46de-b048-94191897a988-kube-api-access-lgb67\") pod \"ovn-northd-0\" (UID: \"44db7ec1-3a40-46de-b048-94191897a988\") " pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.454237 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.477947 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.478000 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd4ms\" (UniqueName: \"kubernetes.io/projected/30704a6a-68d7-41ad-9ca5-aa8f3468699b-kube-api-access-dd4ms\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.478011 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.478021 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30704a6a-68d7-41ad-9ca5-aa8f3468699b-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.768505 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2nwnw"] Jan 26 14:27:16 crc kubenswrapper[4922]: W0126 14:27:16.777541 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d76ee55_7df1_42fb_817b_031f44d36f82.slice/crio-2e73c50d3a6b2bf9368d9f75c43224f62c05a4375b095ec43aa6f1c07f4e048e WatchSource:0}: Error finding container 2e73c50d3a6b2bf9368d9f75c43224f62c05a4375b095ec43aa6f1c07f4e048e: Status 404 returned error can't find the container with id 2e73c50d3a6b2bf9368d9f75c43224f62c05a4375b095ec43aa6f1c07f4e048e Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.887038 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bdc6cbc7-b67pp"] Jan 26 14:27:16 crc kubenswrapper[4922]: I0126 14:27:16.952454 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 14:27:17 crc kubenswrapper[4922]: I0126 14:27:17.241133 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" event={"ID":"c79c0339-860e-4665-87fe-bc34b6f31229","Type":"ContainerStarted","Data":"088f7303fc1bfcfda3f33c32e8ab8675cc4963593be5b585a699b99d24d1d1e5"} Jan 26 14:27:17 crc kubenswrapper[4922]: I0126 14:27:17.242192 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2nwnw" event={"ID":"6d76ee55-7df1-42fb-817b-031f44d36f82","Type":"ContainerStarted","Data":"2e73c50d3a6b2bf9368d9f75c43224f62c05a4375b095ec43aa6f1c07f4e048e"} Jan 26 14:27:17 crc kubenswrapper[4922]: I0126 14:27:17.243939 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"44db7ec1-3a40-46de-b048-94191897a988","Type":"ContainerStarted","Data":"aa65e14fb8424bdb48244e68cded53ab7fc118240a16ee740452b36edb42e11a"} Jan 26 14:27:17 crc kubenswrapper[4922]: I0126 14:27:17.244405 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6db78fcf6f-bxfkc" Jan 26 14:27:17 crc kubenswrapper[4922]: I0126 14:27:17.308900 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6db78fcf6f-bxfkc"] Jan 26 14:27:17 crc kubenswrapper[4922]: I0126 14:27:17.316664 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6db78fcf6f-bxfkc"] Jan 26 14:27:17 crc kubenswrapper[4922]: I0126 14:27:17.597953 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 14:27:17 crc kubenswrapper[4922]: I0126 14:27:17.598327 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.197272 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bdc6cbc7-b67pp"] Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.229461 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb8c8997-s9zsx"] Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.230877 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.284524 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvt26\" (UniqueName: \"kubernetes.io/projected/e4d5060c-9d3e-4517-8910-0ef46172a190-kube-api-access-gvt26\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.284888 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-dns-svc\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.284980 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.285016 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-config\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.285099 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.386413 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.386496 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvt26\" (UniqueName: \"kubernetes.io/projected/e4d5060c-9d3e-4517-8910-0ef46172a190-kube-api-access-gvt26\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.386552 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-dns-svc\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.386663 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.386705 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-config\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.388574 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.389670 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-dns-svc\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.391223 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.392690 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-config\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.407864 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvt26\" (UniqueName: \"kubernetes.io/projected/e4d5060c-9d3e-4517-8910-0ef46172a190-kube-api-access-gvt26\") pod \"dnsmasq-dns-6ffb8c8997-s9zsx\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " 
pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.542157 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30704a6a-68d7-41ad-9ca5-aa8f3468699b" path="/var/lib/kubelet/pods/30704a6a-68d7-41ad-9ca5-aa8f3468699b/volumes" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.542525 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.561043 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:19 crc kubenswrapper[4922]: I0126 14:27:19.572655 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb8c8997-s9zsx"] Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.130858 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb8c8997-s9zsx"] Jan 26 14:27:20 crc kubenswrapper[4922]: W0126 14:27:20.137279 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4d5060c_9d3e_4517_8910_0ef46172a190.slice/crio-468ad82f871f465e7dcd83cf379c4648a61f9dabcb1b65e566e7301bbfd5cc7e WatchSource:0}: Error finding container 468ad82f871f465e7dcd83cf379c4648a61f9dabcb1b65e566e7301bbfd5cc7e: Status 404 returned error can't find the container with id 468ad82f871f465e7dcd83cf379c4648a61f9dabcb1b65e566e7301bbfd5cc7e Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.231578 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.238177 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.249527 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.250316 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.250831 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.251102 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.251432 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gkc5z" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.317366 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.317411 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/03d225b5-5466-45de-9417-54a11fa79429-lock\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.317444 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.317488 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/03d225b5-5466-45de-9417-54a11fa79429-cache\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.317591 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gcj\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-kube-api-access-m4gcj\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.317615 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d225b5-5466-45de-9417-54a11fa79429-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.418936 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4gcj\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-kube-api-access-m4gcj\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.418978 
4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d225b5-5466-45de-9417-54a11fa79429-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.419049 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.419093 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/03d225b5-5466-45de-9417-54a11fa79429-lock\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.419133 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.419176 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/03d225b5-5466-45de-9417-54a11fa79429-cache\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.419600 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/03d225b5-5466-45de-9417-54a11fa79429-cache\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: E0126 14:27:20.420498 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 14:27:20 crc kubenswrapper[4922]: E0126 14:27:20.420538 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 14:27:20 crc kubenswrapper[4922]: E0126 14:27:20.420603 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift podName:03d225b5-5466-45de-9417-54a11fa79429 nodeName:}" failed. No retries permitted until 2026-01-26 14:27:20.920583413 +0000 UTC m=+1058.122846205 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift") pod "swift-storage-0" (UID: "03d225b5-5466-45de-9417-54a11fa79429") : configmap "swift-ring-files" not found Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.420901 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.421658 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/03d225b5-5466-45de-9417-54a11fa79429-lock\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.427799 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d225b5-5466-45de-9417-54a11fa79429-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.439287 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4gcj\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-kube-api-access-m4gcj\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.460207 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.556009 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" event={"ID":"e4d5060c-9d3e-4517-8910-0ef46172a190","Type":"ContainerStarted","Data":"468ad82f871f465e7dcd83cf379c4648a61f9dabcb1b65e566e7301bbfd5cc7e"} Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.558953 4922 generic.go:334] "Generic (PLEG): container finished" podID="1bf34eda-49e0-412d-82b6-fe587116900f" containerID="db29138d2553985c8a0ac700ae413106b67e265d80edda513a17bed8b28ad52e" exitCode=0 Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.558983 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerDied","Data":"db29138d2553985c8a0ac700ae413106b67e265d80edda513a17bed8b28ad52e"} Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.785470 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8rdcm"] Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.786811 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.789199 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.789214 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.789309 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.819102 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8rdcm"] Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.830104 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-8rdcm"] Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.847411 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-swiftconf\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.847495 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-scripts\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.847518 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-combined-ca-bundle\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.847572 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-ring-data-devices\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.847593 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4174daed-d224-4a88-86cd-fc1c2df8621e-etc-swift\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.847654 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr6qr\" (UniqueName: \"kubernetes.io/projected/4174daed-d224-4a88-86cd-fc1c2df8621e-kube-api-access-mr6qr\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.847674 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-dispersionconf\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: E0126 14:27:20.852870 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-mr6qr ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-8rdcm" podUID="4174daed-d224-4a88-86cd-fc1c2df8621e" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.875760 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9mb5n"] Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.879617 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.899126 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9mb5n"] Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.949708 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr6qr\" (UniqueName: \"kubernetes.io/projected/4174daed-d224-4a88-86cd-fc1c2df8621e-kube-api-access-mr6qr\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.949749 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-dispersionconf\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.949792 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-ring-data-devices\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.949868 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-swiftconf\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.949888 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-scripts\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.949905 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-combined-ca-bundle\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.949958 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9xb5\" (UniqueName: \"kubernetes.io/projected/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-kube-api-access-l9xb5\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950073 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950144 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-scripts\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950165 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-combined-ca-bundle\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: E0126 14:27:20.950189 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 14:27:20 crc kubenswrapper[4922]: E0126 14:27:20.950218 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950228 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-etc-swift\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950252 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-swiftconf\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:20 crc kubenswrapper[4922]: E0126 14:27:20.950272 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift podName:03d225b5-5466-45de-9417-54a11fa79429 nodeName:}" failed. No retries permitted until 2026-01-26 14:27:21.950255272 +0000 UTC m=+1059.152518044 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift") pod "swift-storage-0" (UID: "03d225b5-5466-45de-9417-54a11fa79429") : configmap "swift-ring-files" not found Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950315 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-ring-data-devices\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950338 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4174daed-d224-4a88-86cd-fc1c2df8621e-etc-swift\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950388 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-dispersionconf\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950810 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4174daed-d224-4a88-86cd-fc1c2df8621e-etc-swift\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.950896 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-scripts\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.951307 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-ring-data-devices\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.953307 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-swiftconf\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.953500 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-dispersionconf\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.955564 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-combined-ca-bundle\") pod 
\"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:20 crc kubenswrapper[4922]: I0126 14:27:20.966232 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr6qr\" (UniqueName: \"kubernetes.io/projected/4174daed-d224-4a88-86cd-fc1c2df8621e-kube-api-access-mr6qr\") pod \"swift-ring-rebalance-8rdcm\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.052403 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-ring-data-devices\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.052523 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-scripts\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.052558 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-combined-ca-bundle\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.052587 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9xb5\" (UniqueName: \"kubernetes.io/projected/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-kube-api-access-l9xb5\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.052650 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-etc-swift\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.052667 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-swiftconf\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.052721 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-dispersionconf\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.053400 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-etc-swift\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " 
pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.053964 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-ring-data-devices\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.054046 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-scripts\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.056006 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-dispersionconf\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.057162 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-swiftconf\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.057742 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-combined-ca-bundle\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.098923 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9xb5\" (UniqueName: \"kubernetes.io/projected/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-kube-api-access-l9xb5\") pod \"swift-ring-rebalance-9mb5n\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.202002 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.566747 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.583814 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.640011 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9mb5n"] Jan 26 14:27:21 crc kubenswrapper[4922]: W0126 14:27:21.652943 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8cb73d7_e172_413f_ad9b_9fdf5afcb2eb.slice/crio-303aef872335ff2a01de93b5644a4fa1a305d36a44a7c62beb86e7b20cb08a46 WatchSource:0}: Error finding container 303aef872335ff2a01de93b5644a4fa1a305d36a44a7c62beb86e7b20cb08a46: Status 404 returned error can't find the container with id 303aef872335ff2a01de93b5644a4fa1a305d36a44a7c62beb86e7b20cb08a46 Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.664871 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4174daed-d224-4a88-86cd-fc1c2df8621e-etc-swift\") pod \"4174daed-d224-4a88-86cd-fc1c2df8621e\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.664952 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-ring-data-devices\") pod \"4174daed-d224-4a88-86cd-fc1c2df8621e\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.664991 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-dispersionconf\") pod \"4174daed-d224-4a88-86cd-fc1c2df8621e\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.665037 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-combined-ca-bundle\") pod \"4174daed-d224-4a88-86cd-fc1c2df8621e\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.665087 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-scripts\") pod \"4174daed-d224-4a88-86cd-fc1c2df8621e\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.665114 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr6qr\" (UniqueName: \"kubernetes.io/projected/4174daed-d224-4a88-86cd-fc1c2df8621e-kube-api-access-mr6qr\") pod \"4174daed-d224-4a88-86cd-fc1c2df8621e\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.665135 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-swiftconf\") pod \"4174daed-d224-4a88-86cd-fc1c2df8621e\" (UID: \"4174daed-d224-4a88-86cd-fc1c2df8621e\") " Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.666863 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4174daed-d224-4a88-86cd-fc1c2df8621e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4174daed-d224-4a88-86cd-fc1c2df8621e" (UID: 
"4174daed-d224-4a88-86cd-fc1c2df8621e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.667477 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4174daed-d224-4a88-86cd-fc1c2df8621e" (UID: "4174daed-d224-4a88-86cd-fc1c2df8621e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.669046 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-scripts" (OuterVolumeSpecName: "scripts") pod "4174daed-d224-4a88-86cd-fc1c2df8621e" (UID: "4174daed-d224-4a88-86cd-fc1c2df8621e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.672261 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4174daed-d224-4a88-86cd-fc1c2df8621e" (UID: "4174daed-d224-4a88-86cd-fc1c2df8621e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.673370 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4174daed-d224-4a88-86cd-fc1c2df8621e" (UID: "4174daed-d224-4a88-86cd-fc1c2df8621e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.673903 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4174daed-d224-4a88-86cd-fc1c2df8621e-kube-api-access-mr6qr" (OuterVolumeSpecName: "kube-api-access-mr6qr") pod "4174daed-d224-4a88-86cd-fc1c2df8621e" (UID: "4174daed-d224-4a88-86cd-fc1c2df8621e"). InnerVolumeSpecName "kube-api-access-mr6qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.673953 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4174daed-d224-4a88-86cd-fc1c2df8621e" (UID: "4174daed-d224-4a88-86cd-fc1c2df8621e"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.767499 4922 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.767536 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.767549 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.767562 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr6qr\" (UniqueName: \"kubernetes.io/projected/4174daed-d224-4a88-86cd-fc1c2df8621e-kube-api-access-mr6qr\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.767576 4922 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4174daed-d224-4a88-86cd-fc1c2df8621e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.767587 4922 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4174daed-d224-4a88-86cd-fc1c2df8621e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.767599 4922 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4174daed-d224-4a88-86cd-fc1c2df8621e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:21 crc kubenswrapper[4922]: I0126 14:27:21.973174 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:21 crc kubenswrapper[4922]: E0126 14:27:21.973337 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 14:27:21 crc kubenswrapper[4922]: E0126 14:27:21.973554 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 14:27:21 crc kubenswrapper[4922]: E0126 14:27:21.973607 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift podName:03d225b5-5466-45de-9417-54a11fa79429 nodeName:}" failed. No retries permitted until 2026-01-26 14:27:23.973590799 +0000 UTC m=+1061.175853561 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift") pod "swift-storage-0" (UID: "03d225b5-5466-45de-9417-54a11fa79429") : configmap "swift-ring-files" not found Jan 26 14:27:22 crc kubenswrapper[4922]: I0126 14:27:22.577682 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9mb5n" event={"ID":"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb","Type":"ContainerStarted","Data":"303aef872335ff2a01de93b5644a4fa1a305d36a44a7c62beb86e7b20cb08a46"} Jan 26 14:27:22 crc kubenswrapper[4922]: I0126 14:27:22.577700 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8rdcm" Jan 26 14:27:22 crc kubenswrapper[4922]: I0126 14:27:22.638178 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-8rdcm"] Jan 26 14:27:22 crc kubenswrapper[4922]: I0126 14:27:22.647366 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-8rdcm"] Jan 26 14:27:23 crc kubenswrapper[4922]: I0126 14:27:23.103847 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4174daed-d224-4a88-86cd-fc1c2df8621e" path="/var/lib/kubelet/pods/4174daed-d224-4a88-86cd-fc1c2df8621e/volumes" Jan 26 14:27:23 crc kubenswrapper[4922]: I0126 14:27:23.587819 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" event={"ID":"c79c0339-860e-4665-87fe-bc34b6f31229","Type":"ContainerStarted","Data":"866b78e3e1de54a0b4530e5f1fa20674b705261836f5acc54fa0a51714b786cd"} Jan 26 14:27:23 crc kubenswrapper[4922]: I0126 14:27:23.589625 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2nwnw" event={"ID":"6d76ee55-7df1-42fb-817b-031f44d36f82","Type":"ContainerStarted","Data":"d2c7a6b4d478f6b199bba34895d3a43cf171d4c956c8eb0bd8eb3329aa0966f7"} Jan 26 14:27:24 crc kubenswrapper[4922]: I0126 14:27:24.005725 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:24 crc kubenswrapper[4922]: E0126 14:27:24.005991 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 14:27:24 crc kubenswrapper[4922]: E0126 14:27:24.006040 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 14:27:24 crc kubenswrapper[4922]: E0126 14:27:24.006160 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift podName:03d225b5-5466-45de-9417-54a11fa79429 nodeName:}" failed. No retries permitted until 2026-01-26 14:27:28.006128583 +0000 UTC m=+1065.208391395 (durationBeforeRetry 4s). 
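
Note durationBeforeRetry across these records: 2s for the first failure, 4s here, with 8s and 16s following. The kubelet's pending-operations tracker (nestedpendingoperations.go) backs off exponentially on a failing volume operation instead of retrying at a fixed interval. A toy reproduction of the doubling visible in this excerpt — the cap is an assumed value for the sketch, not something shown in the log:

package main

import (
	"fmt"
	"time"
)

// Reproduces the durationBeforeRetry progression seen in the log
// (2s, 4s, 8s, 16s, ...). maxDelay is an assumption for illustration;
// the actual cap is not visible in this excerpt.
func main() {
	delay := 2 * time.Second
	maxDelay := 2 * time.Minute // assumed cap
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, delay)
		if delay*2 <= maxDelay {
			delay *= 2
		}
	}
}
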
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift") pod "swift-storage-0" (UID: "03d225b5-5466-45de-9417-54a11fa79429") : configmap "swift-ring-files" not found Jan 26 14:27:24 crc kubenswrapper[4922]: I0126 14:27:24.600401 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" event={"ID":"e4d5060c-9d3e-4517-8910-0ef46172a190","Type":"ContainerStarted","Data":"b2f8c13e0cd0bc594f4ceb5499ebb25a917383f3af656ceb5c5959464317b2cc"} Jan 26 14:27:24 crc kubenswrapper[4922]: I0126 14:27:24.602640 4922 generic.go:334] "Generic (PLEG): container finished" podID="c79c0339-860e-4665-87fe-bc34b6f31229" containerID="866b78e3e1de54a0b4530e5f1fa20674b705261836f5acc54fa0a51714b786cd" exitCode=0 Jan 26 14:27:24 crc kubenswrapper[4922]: I0126 14:27:24.602677 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" event={"ID":"c79c0339-860e-4665-87fe-bc34b6f31229","Type":"ContainerDied","Data":"866b78e3e1de54a0b4530e5f1fa20674b705261836f5acc54fa0a51714b786cd"} Jan 26 14:27:25 crc kubenswrapper[4922]: I0126 14:27:25.600827 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 14:27:25 crc kubenswrapper[4922]: I0126 14:27:25.600890 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 14:27:25 crc kubenswrapper[4922]: I0126 14:27:25.614330 4922 generic.go:334] "Generic (PLEG): container finished" podID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerID="b2f8c13e0cd0bc594f4ceb5499ebb25a917383f3af656ceb5c5959464317b2cc" exitCode=0 Jan 26 14:27:25 crc kubenswrapper[4922]: I0126 14:27:25.614450 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" event={"ID":"e4d5060c-9d3e-4517-8910-0ef46172a190","Type":"ContainerDied","Data":"b2f8c13e0cd0bc594f4ceb5499ebb25a917383f3af656ceb5c5959464317b2cc"} Jan 26 14:27:25 crc kubenswrapper[4922]: I0126 14:27:25.689782 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2nwnw" podStartSLOduration=10.689762645 podStartE2EDuration="10.689762645s" podCreationTimestamp="2026-01-26 14:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:27:25.684035764 +0000 UTC m=+1062.886298536" watchObservedRunningTime="2026-01-26 14:27:25.689762645 +0000 UTC m=+1062.892025417" Jan 26 14:27:25 crc kubenswrapper[4922]: I0126 14:27:25.947486 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.075674 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-config\") pod \"c79c0339-860e-4665-87fe-bc34b6f31229\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.075796 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-dns-svc\") pod \"c79c0339-860e-4665-87fe-bc34b6f31229\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.075833 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdbd5\" (UniqueName: \"kubernetes.io/projected/c79c0339-860e-4665-87fe-bc34b6f31229-kube-api-access-gdbd5\") pod \"c79c0339-860e-4665-87fe-bc34b6f31229\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.075979 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-nb\") pod \"c79c0339-860e-4665-87fe-bc34b6f31229\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.076005 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-sb\") pod \"c79c0339-860e-4665-87fe-bc34b6f31229\" (UID: \"c79c0339-860e-4665-87fe-bc34b6f31229\") " Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.080422 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79c0339-860e-4665-87fe-bc34b6f31229-kube-api-access-gdbd5" (OuterVolumeSpecName: "kube-api-access-gdbd5") pod "c79c0339-860e-4665-87fe-bc34b6f31229" (UID: "c79c0339-860e-4665-87fe-bc34b6f31229"). InnerVolumeSpecName "kube-api-access-gdbd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.100475 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c79c0339-860e-4665-87fe-bc34b6f31229" (UID: "c79c0339-860e-4665-87fe-bc34b6f31229"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.102263 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-config" (OuterVolumeSpecName: "config") pod "c79c0339-860e-4665-87fe-bc34b6f31229" (UID: "c79c0339-860e-4665-87fe-bc34b6f31229"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.104393 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c79c0339-860e-4665-87fe-bc34b6f31229" (UID: "c79c0339-860e-4665-87fe-bc34b6f31229"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.105796 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c79c0339-860e-4665-87fe-bc34b6f31229" (UID: "c79c0339-860e-4665-87fe-bc34b6f31229"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.178523 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdbd5\" (UniqueName: \"kubernetes.io/projected/c79c0339-860e-4665-87fe-bc34b6f31229-kube-api-access-gdbd5\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.178578 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.178597 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.178612 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.178629 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c79c0339-860e-4665-87fe-bc34b6f31229-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.624140 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" event={"ID":"e4d5060c-9d3e-4517-8910-0ef46172a190","Type":"ContainerStarted","Data":"fcb4e7b3fe98f08e98fe4a082932aff9e57a0602f7a965cbde94b4a0fd1e802a"} Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.624208 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.627451 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" event={"ID":"c79c0339-860e-4665-87fe-bc34b6f31229","Type":"ContainerDied","Data":"088f7303fc1bfcfda3f33c32e8ab8675cc4963593be5b585a699b99d24d1d1e5"} Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.627482 4922 scope.go:117] "RemoveContainer" containerID="866b78e3e1de54a0b4530e5f1fa20674b705261836f5acc54fa0a51714b786cd" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.627487 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bdc6cbc7-b67pp" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.643641 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" podStartSLOduration=7.64361582 podStartE2EDuration="7.64361582s" podCreationTimestamp="2026-01-26 14:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:27:26.640845607 +0000 UTC m=+1063.843108389" watchObservedRunningTime="2026-01-26 14:27:26.64361582 +0000 UTC m=+1063.845878612" Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.761255 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bdc6cbc7-b67pp"] Jan 26 14:27:26 crc kubenswrapper[4922]: I0126 14:27:26.770518 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bdc6cbc7-b67pp"] Jan 26 14:27:27 crc kubenswrapper[4922]: I0126 14:27:27.104741 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79c0339-860e-4665-87fe-bc34b6f31229" path="/var/lib/kubelet/pods/c79c0339-860e-4665-87fe-bc34b6f31229/volumes" Jan 26 14:27:28 crc kubenswrapper[4922]: I0126 14:27:28.012308 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:28 crc kubenswrapper[4922]: E0126 14:27:28.012566 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 14:27:28 crc kubenswrapper[4922]: E0126 14:27:28.012601 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 14:27:28 crc kubenswrapper[4922]: E0126 14:27:28.012672 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift podName:03d225b5-5466-45de-9417-54a11fa79429 nodeName:}" failed. No retries permitted until 2026-01-26 14:27:36.012650686 +0000 UTC m=+1073.214913468 (durationBeforeRetry 8s). 
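
These retries cannot succeed until the rebalance job publishes the rings, and the ErrImagePull entries at 14:27:37 further down show why it has not: the job's own container is stuck pulling 38.102.83.230:5001/podified-master-centos10/openstack-swift-proxy-server:watcher_latest. To confirm when the ConfigMap finally lands, one could poll for it; a hedged client-go sketch (the kubeconfig path and poll interval are arbitrary choices, not taken from this environment):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Polls the openstack namespace until the swift-ring-files ConfigMap
// exists -- the same object the failing etc-swift projection needs.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		_, err := client.CoreV1().ConfigMaps("openstack").
			Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
		if err == nil {
			fmt.Println("swift-ring-files exists; the etc-swift mount can now succeed")
			return
		}
		fmt.Println("still missing:", err)
		time.Sleep(5 * time.Second)
	}
}
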
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift") pod "swift-storage-0" (UID: "03d225b5-5466-45de-9417-54a11fa79429") : configmap "swift-ring-files" not found Jan 26 14:27:30 crc kubenswrapper[4922]: I0126 14:27:30.929992 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 14:27:31 crc kubenswrapper[4922]: I0126 14:27:31.040219 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 14:27:31 crc kubenswrapper[4922]: I0126 14:27:31.794584 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 14:27:31 crc kubenswrapper[4922]: I0126 14:27:31.908879 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.341503 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mnz7q"] Jan 26 14:27:34 crc kubenswrapper[4922]: E0126 14:27:34.342134 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79c0339-860e-4665-87fe-bc34b6f31229" containerName="init" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.342146 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79c0339-860e-4665-87fe-bc34b6f31229" containerName="init" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.342335 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79c0339-860e-4665-87fe-bc34b6f31229" containerName="init" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.342894 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.344425 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.357039 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mnz7q"] Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.441890 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ad42666-6f72-41ee-b3b6-c34eae6345b8-operator-scripts\") pod \"root-account-create-update-mnz7q\" (UID: \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\") " pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.442153 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57fl9\" (UniqueName: \"kubernetes.io/projected/2ad42666-6f72-41ee-b3b6-c34eae6345b8-kube-api-access-57fl9\") pod \"root-account-create-update-mnz7q\" (UID: \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\") " pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.544834 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ad42666-6f72-41ee-b3b6-c34eae6345b8-operator-scripts\") pod \"root-account-create-update-mnz7q\" (UID: \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\") " pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.545163 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57fl9\" (UniqueName: \"kubernetes.io/projected/2ad42666-6f72-41ee-b3b6-c34eae6345b8-kube-api-access-57fl9\") pod \"root-account-create-update-mnz7q\" (UID: \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\") " pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.545569 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ad42666-6f72-41ee-b3b6-c34eae6345b8-operator-scripts\") pod \"root-account-create-update-mnz7q\" (UID: \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\") " pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.563261 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.599185 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57fl9\" (UniqueName: \"kubernetes.io/projected/2ad42666-6f72-41ee-b3b6-c34eae6345b8-kube-api-access-57fl9\") pod \"root-account-create-update-mnz7q\" (UID: \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\") " pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.662623 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.704597 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d9f4b769-kl9q7"] Jan 26 14:27:34 crc kubenswrapper[4922]: I0126 14:27:34.704862 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" podUID="07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" containerName="dnsmasq-dns" containerID="cri-o://e6dc51783a8b9a3a9bfcfc9909e1c034f64e1ac9a65d1efff6ff6d991246b371" gracePeriod=10 Jan 26 14:27:35 crc kubenswrapper[4922]: I0126 14:27:35.708373 4922 generic.go:334] "Generic (PLEG): container finished" podID="07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" containerID="e6dc51783a8b9a3a9bfcfc9909e1c034f64e1ac9a65d1efff6ff6d991246b371" exitCode=0 Jan 26 14:27:35 crc kubenswrapper[4922]: I0126 14:27:35.708422 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" event={"ID":"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce","Type":"ContainerDied","Data":"e6dc51783a8b9a3a9bfcfc9909e1c034f64e1ac9a65d1efff6ff6d991246b371"} Jan 26 14:27:36 crc kubenswrapper[4922]: I0126 14:27:36.073913 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:36 crc kubenswrapper[4922]: E0126 14:27:36.074227 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 14:27:36 crc kubenswrapper[4922]: E0126 14:27:36.074269 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 14:27:36 crc kubenswrapper[4922]: E0126 14:27:36.074338 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift podName:03d225b5-5466-45de-9417-54a11fa79429 nodeName:}" failed. No retries permitted until 2026-01-26 14:27:52.074316033 +0000 UTC m=+1089.276578825 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift") pod "swift-storage-0" (UID: "03d225b5-5466-45de-9417-54a11fa79429") : configmap "swift-ring-files" not found Jan 26 14:27:36 crc kubenswrapper[4922]: I0126 14:27:36.882998 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rcb2t"] Jan 26 14:27:36 crc kubenswrapper[4922]: I0126 14:27:36.885752 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:36 crc kubenswrapper[4922]: I0126 14:27:36.899600 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rcb2t"] Jan 26 14:27:36 crc kubenswrapper[4922]: I0126 14:27:36.995179 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ba35-account-create-update-p8kwd"] Jan 26 14:27:36 crc kubenswrapper[4922]: I0126 14:27:36.996381 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-operator-scripts\") pod \"keystone-db-create-rcb2t\" (UID: \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\") " pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:36 crc kubenswrapper[4922]: I0126 14:27:36.996544 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:36 crc kubenswrapper[4922]: I0126 14:27:36.996681 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2nt\" (UniqueName: \"kubernetes.io/projected/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-kube-api-access-ll2nt\") pod \"keystone-db-create-rcb2t\" (UID: \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\") " pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:36 crc kubenswrapper[4922]: I0126 14:27:36.998783 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.002572 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ba35-account-create-update-p8kwd"] Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.097871 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-operator-scripts\") pod \"keystone-db-create-rcb2t\" (UID: \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\") " pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.097935 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2nt\" (UniqueName: \"kubernetes.io/projected/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-kube-api-access-ll2nt\") pod \"keystone-db-create-rcb2t\" (UID: \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\") " pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.097988 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64z7\" (UniqueName: \"kubernetes.io/projected/de8907e8-ff60-47a7-a7da-cce27fd8ede1-kube-api-access-b64z7\") pod \"keystone-ba35-account-create-update-p8kwd\" (UID: \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\") " pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.098024 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8907e8-ff60-47a7-a7da-cce27fd8ede1-operator-scripts\") pod \"keystone-ba35-account-create-update-p8kwd\" (UID: \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\") " pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.098731 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-operator-scripts\") pod \"keystone-db-create-rcb2t\" (UID: \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\") " pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.133143 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2nt\" (UniqueName: \"kubernetes.io/projected/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-kube-api-access-ll2nt\") pod \"keystone-db-create-rcb2t\" (UID: \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\") " pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.179895 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-s566l"] Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.180903 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s566l" Jan 26 14:27:37 crc kubenswrapper[4922]: E0126 14:27:37.196665 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-swift-proxy-server:watcher_latest" Jan 26 14:27:37 crc kubenswrapper[4922]: E0126 14:27:37.196733 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-swift-proxy-server:watcher_latest" Jan 26 14:27:37 crc kubenswrapper[4922]: E0126 14:27:37.196893 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:swift-ring-rebalance,Image:38.102.83.230:5001/podified-master-centos10/openstack-swift-proxy-server:watcher_latest,Command:[/usr/local/bin/swift-ring-tool 
all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:openstack,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:a58f56ff-a36c-4ab2-941e-798b15c71ca2,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9xb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-9mb5n_openstack(a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:27:37 crc kubenswrapper[4922]: E0126 14:27:37.198325 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/swift-ring-rebalance-9mb5n" podUID="a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.199637 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b64z7\" (UniqueName: \"kubernetes.io/projected/de8907e8-ff60-47a7-a7da-cce27fd8ede1-kube-api-access-b64z7\") pod \"keystone-ba35-account-create-update-p8kwd\" (UID: \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\") " pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 
14:27:37.199869 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8907e8-ff60-47a7-a7da-cce27fd8ede1-operator-scripts\") pod \"keystone-ba35-account-create-update-p8kwd\" (UID: \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\") " pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.216672 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8907e8-ff60-47a7-a7da-cce27fd8ede1-operator-scripts\") pod \"keystone-ba35-account-create-update-p8kwd\" (UID: \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\") " pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.222458 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.235191 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b64z7\" (UniqueName: \"kubernetes.io/projected/de8907e8-ff60-47a7-a7da-cce27fd8ede1-kube-api-access-b64z7\") pod \"keystone-ba35-account-create-update-p8kwd\" (UID: \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\") " pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.235271 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s566l"] Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.296938 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6410-account-create-update-5f45b"] Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.298813 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.300666 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.301801 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-operator-scripts\") pod \"placement-db-create-s566l\" (UID: \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\") " pod="openstack/placement-db-create-s566l" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.302001 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4h9\" (UniqueName: \"kubernetes.io/projected/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-kube-api-access-hl4h9\") pod \"placement-db-create-s566l\" (UID: \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\") " pod="openstack/placement-db-create-s566l" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.310523 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.323313 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6410-account-create-update-5f45b"] Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.358772 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.403163 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-dns-svc\") pod \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.403253 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-config\") pod \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.403326 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz85x\" (UniqueName: \"kubernetes.io/projected/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-kube-api-access-nz85x\") pod \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\" (UID: \"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce\") " Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.403591 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-operator-scripts\") pod \"placement-db-create-s566l\" (UID: \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\") " pod="openstack/placement-db-create-s566l" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.403614 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl4h9\" (UniqueName: \"kubernetes.io/projected/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-kube-api-access-hl4h9\") pod \"placement-db-create-s566l\" (UID: \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\") " pod="openstack/placement-db-create-s566l" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.403677 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4d5q\" (UniqueName: \"kubernetes.io/projected/560e44e8-1468-48d7-90b7-d205bdb05f9d-kube-api-access-s4d5q\") pod \"placement-6410-account-create-update-5f45b\" (UID: \"560e44e8-1468-48d7-90b7-d205bdb05f9d\") " pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.403747 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/560e44e8-1468-48d7-90b7-d205bdb05f9d-operator-scripts\") pod \"placement-6410-account-create-update-5f45b\" (UID: \"560e44e8-1468-48d7-90b7-d205bdb05f9d\") " pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.404277 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-operator-scripts\") pod \"placement-db-create-s566l\" (UID: \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\") " pod="openstack/placement-db-create-s566l" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.417960 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-kube-api-access-nz85x" (OuterVolumeSpecName: "kube-api-access-nz85x") pod "07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" (UID: "07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce"). 
InnerVolumeSpecName "kube-api-access-nz85x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.436718 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl4h9\" (UniqueName: \"kubernetes.io/projected/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-kube-api-access-hl4h9\") pod \"placement-db-create-s566l\" (UID: \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\") " pod="openstack/placement-db-create-s566l" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.497681 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-config" (OuterVolumeSpecName: "config") pod "07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" (UID: "07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.500504 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" (UID: "07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.505612 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4d5q\" (UniqueName: \"kubernetes.io/projected/560e44e8-1468-48d7-90b7-d205bdb05f9d-kube-api-access-s4d5q\") pod \"placement-6410-account-create-update-5f45b\" (UID: \"560e44e8-1468-48d7-90b7-d205bdb05f9d\") " pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.505817 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/560e44e8-1468-48d7-90b7-d205bdb05f9d-operator-scripts\") pod \"placement-6410-account-create-update-5f45b\" (UID: \"560e44e8-1468-48d7-90b7-d205bdb05f9d\") " pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.505949 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz85x\" (UniqueName: \"kubernetes.io/projected/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-kube-api-access-nz85x\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.506016 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.506134 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.506991 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/560e44e8-1468-48d7-90b7-d205bdb05f9d-operator-scripts\") pod \"placement-6410-account-create-update-5f45b\" (UID: \"560e44e8-1468-48d7-90b7-d205bdb05f9d\") " pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.526354 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4d5q\" 
(UniqueName: \"kubernetes.io/projected/560e44e8-1468-48d7-90b7-d205bdb05f9d-kube-api-access-s4d5q\") pod \"placement-6410-account-create-update-5f45b\" (UID: \"560e44e8-1468-48d7-90b7-d205bdb05f9d\") " pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.647552 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s566l" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.713390 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rcb2t"] Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.715767 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.731475 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"44db7ec1-3a40-46de-b048-94191897a988","Type":"ContainerStarted","Data":"9014b8d81529ffb1b2534e44b93416afa0e6347909824e9f4f042cf133b87b30"} Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.733999 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" event={"ID":"07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce","Type":"ContainerDied","Data":"1d72db20e70aa6f1db58a9a03f3a6a1d15134723af3ae83ef2c6dbdc0d572a72"} Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.734019 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d9f4b769-kl9q7" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.734052 4922 scope.go:117] "RemoveContainer" containerID="e6dc51783a8b9a3a9bfcfc9909e1c034f64e1ac9a65d1efff6ff6d991246b371" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.738518 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerStarted","Data":"ba35316ec29aa18886606a200946cffea4445a9547bda3a81b4193b56a03f676"} Jan 26 14:27:37 crc kubenswrapper[4922]: E0126 14:27:37.739977 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/podified-master-centos10/openstack-swift-proxy-server:watcher_latest\\\"\"" pod="openstack/swift-ring-rebalance-9mb5n" podUID="a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.785318 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d9f4b769-kl9q7"] Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.792085 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d9f4b769-kl9q7"] Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.816171 4922 scope.go:117] "RemoveContainer" containerID="d3f5ce1dd97d5276eae2cf0ae3d85d513f085f3e7db4ae183d0cc14b4005f013" Jan 26 14:27:37 crc kubenswrapper[4922]: W0126 14:27:37.826015 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d62f1e_149e_4aa1_b3d3_54cdcb1a2275.slice/crio-a6ebe0ec2bd39644484e43a57b2e227be536eacfbce493328e4f92c1364e6a20 WatchSource:0}: Error finding container a6ebe0ec2bd39644484e43a57b2e227be536eacfbce493328e4f92c1364e6a20: Status 404 returned error can't find the container with id 
a6ebe0ec2bd39644484e43a57b2e227be536eacfbce493328e4f92c1364e6a20 Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.868135 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mnz7q"] Jan 26 14:27:37 crc kubenswrapper[4922]: I0126 14:27:37.915132 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ba35-account-create-update-p8kwd"] Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.228994 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6410-account-create-update-5f45b"] Jan 26 14:27:38 crc kubenswrapper[4922]: W0126 14:27:38.232853 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod560e44e8_1468_48d7_90b7_d205bdb05f9d.slice/crio-0b6ac2176dd4708ad1959511ce9173cd568a2ffc28fd097ed315dca0724df934 WatchSource:0}: Error finding container 0b6ac2176dd4708ad1959511ce9173cd568a2ffc28fd097ed315dca0724df934: Status 404 returned error can't find the container with id 0b6ac2176dd4708ad1959511ce9173cd568a2ffc28fd097ed315dca0724df934 Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.285175 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s566l"] Jan 26 14:27:38 crc kubenswrapper[4922]: W0126 14:27:38.289023 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd48da751_d5f8_4ef5_b2a0_33864b35ba6c.slice/crio-251c64533aa25380a5dc9b35a1d0a103cce94792f8af5f69d79c8aa4c3ca6285 WatchSource:0}: Error finding container 251c64533aa25380a5dc9b35a1d0a103cce94792f8af5f69d79c8aa4c3ca6285: Status 404 returned error can't find the container with id 251c64533aa25380a5dc9b35a1d0a103cce94792f8af5f69d79c8aa4c3ca6285 Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.749425 4922 generic.go:334] "Generic (PLEG): container finished" podID="de8907e8-ff60-47a7-a7da-cce27fd8ede1" containerID="8cebcb680ace3e173974238d45f9aab1c4c268999f4b8bc267ed9d1947c553ab" exitCode=0 Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.749597 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ba35-account-create-update-p8kwd" event={"ID":"de8907e8-ff60-47a7-a7da-cce27fd8ede1","Type":"ContainerDied","Data":"8cebcb680ace3e173974238d45f9aab1c4c268999f4b8bc267ed9d1947c553ab"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.749773 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ba35-account-create-update-p8kwd" event={"ID":"de8907e8-ff60-47a7-a7da-cce27fd8ede1","Type":"ContainerStarted","Data":"b73fe9572db83dec5d4356272bf245c66b6a3616d6de5600da05140f5cae53c6"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.751269 4922 generic.go:334] "Generic (PLEG): container finished" podID="2ad42666-6f72-41ee-b3b6-c34eae6345b8" containerID="27f01f1b0fc7c2cb0d45acaf162ee20f5cdacea5e8b4eac03e1be1c62ab89d3d" exitCode=0 Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.751321 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mnz7q" event={"ID":"2ad42666-6f72-41ee-b3b6-c34eae6345b8","Type":"ContainerDied","Data":"27f01f1b0fc7c2cb0d45acaf162ee20f5cdacea5e8b4eac03e1be1c62ab89d3d"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.751341 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mnz7q" 
event={"ID":"2ad42666-6f72-41ee-b3b6-c34eae6345b8","Type":"ContainerStarted","Data":"8c7f63c0dd38b9fae8e43b46c7989689c00b1d27a355a3bf8f3792c9246f03b5"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.752634 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s566l" event={"ID":"d48da751-d5f8-4ef5-b2a0-33864b35ba6c","Type":"ContainerStarted","Data":"306f6e7a38c62f6fc6ed0b9f0d6b1a1e49320d852ec0070152f4991feee54f36"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.752655 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s566l" event={"ID":"d48da751-d5f8-4ef5-b2a0-33864b35ba6c","Type":"ContainerStarted","Data":"251c64533aa25380a5dc9b35a1d0a103cce94792f8af5f69d79c8aa4c3ca6285"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.754281 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"44db7ec1-3a40-46de-b048-94191897a988","Type":"ContainerStarted","Data":"5883a51cf8f4909e7b5cc787337c156e81d6b8ef57906c3e3b27fe44446d564a"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.754679 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.757084 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6410-account-create-update-5f45b" event={"ID":"560e44e8-1468-48d7-90b7-d205bdb05f9d","Type":"ContainerStarted","Data":"9cf8cdb005164cbd233fd5bd637e836d1553871dd6e551dee01b7451ae4955dc"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.757112 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6410-account-create-update-5f45b" event={"ID":"560e44e8-1468-48d7-90b7-d205bdb05f9d","Type":"ContainerStarted","Data":"0b6ac2176dd4708ad1959511ce9173cd568a2ffc28fd097ed315dca0724df934"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.758612 4922 generic.go:334] "Generic (PLEG): container finished" podID="02d62f1e-149e-4aa1-b3d3-54cdcb1a2275" containerID="27c8ed24f5147f682f0bc61427c34ae0325afdd3287d91bb80d9d840315cd574" exitCode=0 Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.758636 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcb2t" event={"ID":"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275","Type":"ContainerDied","Data":"27c8ed24f5147f682f0bc61427c34ae0325afdd3287d91bb80d9d840315cd574"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.758650 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcb2t" event={"ID":"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275","Type":"ContainerStarted","Data":"a6ebe0ec2bd39644484e43a57b2e227be536eacfbce493328e4f92c1364e6a20"} Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.794705 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6410-account-create-update-5f45b" podStartSLOduration=1.7946848370000001 podStartE2EDuration="1.794684837s" podCreationTimestamp="2026-01-26 14:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:27:38.78496514 +0000 UTC m=+1075.987227922" watchObservedRunningTime="2026-01-26 14:27:38.794684837 +0000 UTC m=+1075.996947619" Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.804542 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-s566l" 
podStartSLOduration=1.804520865 podStartE2EDuration="1.804520865s" podCreationTimestamp="2026-01-26 14:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:27:38.801760173 +0000 UTC m=+1076.004022955" watchObservedRunningTime="2026-01-26 14:27:38.804520865 +0000 UTC m=+1076.006783657" Jan 26 14:27:38 crc kubenswrapper[4922]: I0126 14:27:38.853676 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.60042454 podStartE2EDuration="22.853648311s" podCreationTimestamp="2026-01-26 14:27:16 +0000 UTC" firstStartedPulling="2026-01-26 14:27:16.972696363 +0000 UTC m=+1054.174959135" lastFinishedPulling="2026-01-26 14:27:37.225920124 +0000 UTC m=+1074.428182906" observedRunningTime="2026-01-26 14:27:38.844214922 +0000 UTC m=+1076.046477694" watchObservedRunningTime="2026-01-26 14:27:38.853648311 +0000 UTC m=+1076.055911093" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.135186 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" path="/var/lib/kubelet/pods/07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce/volumes" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.136292 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-mlnbh"] Jan 26 14:27:39 crc kubenswrapper[4922]: E0126 14:27:39.136712 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" containerName="dnsmasq-dns" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.136740 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" containerName="dnsmasq-dns" Jan 26 14:27:39 crc kubenswrapper[4922]: E0126 14:27:39.136790 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" containerName="init" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.136803 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" containerName="init" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.137419 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="07bb2515-9ed2-4e2a-8b75-3db6fe77d5ce" containerName="dnsmasq-dns" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.138935 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.150980 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-mlnbh"] Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.221545 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-02c3-account-create-update-7ss8l"] Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.223799 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.227434 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.237401 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-02c3-account-create-update-7ss8l"] Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.257226 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pjnk\" (UniqueName: \"kubernetes.io/projected/22235e5e-84ab-4632-98fc-dae804d6e4a4-kube-api-access-2pjnk\") pod \"watcher-db-create-mlnbh\" (UID: \"22235e5e-84ab-4632-98fc-dae804d6e4a4\") " pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.257346 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b76d715-9901-4789-971e-8ba3bd1be5a9-operator-scripts\") pod \"watcher-02c3-account-create-update-7ss8l\" (UID: \"7b76d715-9901-4789-971e-8ba3bd1be5a9\") " pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.257419 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22235e5e-84ab-4632-98fc-dae804d6e4a4-operator-scripts\") pod \"watcher-db-create-mlnbh\" (UID: \"22235e5e-84ab-4632-98fc-dae804d6e4a4\") " pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.257458 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tfv\" (UniqueName: \"kubernetes.io/projected/7b76d715-9901-4789-971e-8ba3bd1be5a9-kube-api-access-27tfv\") pod \"watcher-02c3-account-create-update-7ss8l\" (UID: \"7b76d715-9901-4789-971e-8ba3bd1be5a9\") " pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.359012 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b76d715-9901-4789-971e-8ba3bd1be5a9-operator-scripts\") pod \"watcher-02c3-account-create-update-7ss8l\" (UID: \"7b76d715-9901-4789-971e-8ba3bd1be5a9\") " pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.359184 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22235e5e-84ab-4632-98fc-dae804d6e4a4-operator-scripts\") pod \"watcher-db-create-mlnbh\" (UID: \"22235e5e-84ab-4632-98fc-dae804d6e4a4\") " pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.359255 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tfv\" (UniqueName: \"kubernetes.io/projected/7b76d715-9901-4789-971e-8ba3bd1be5a9-kube-api-access-27tfv\") pod \"watcher-02c3-account-create-update-7ss8l\" (UID: \"7b76d715-9901-4789-971e-8ba3bd1be5a9\") " pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.359414 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pjnk\" (UniqueName: 
\"kubernetes.io/projected/22235e5e-84ab-4632-98fc-dae804d6e4a4-kube-api-access-2pjnk\") pod \"watcher-db-create-mlnbh\" (UID: \"22235e5e-84ab-4632-98fc-dae804d6e4a4\") " pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.360173 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22235e5e-84ab-4632-98fc-dae804d6e4a4-operator-scripts\") pod \"watcher-db-create-mlnbh\" (UID: \"22235e5e-84ab-4632-98fc-dae804d6e4a4\") " pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.360727 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b76d715-9901-4789-971e-8ba3bd1be5a9-operator-scripts\") pod \"watcher-02c3-account-create-update-7ss8l\" (UID: \"7b76d715-9901-4789-971e-8ba3bd1be5a9\") " pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.380841 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pjnk\" (UniqueName: \"kubernetes.io/projected/22235e5e-84ab-4632-98fc-dae804d6e4a4-kube-api-access-2pjnk\") pod \"watcher-db-create-mlnbh\" (UID: \"22235e5e-84ab-4632-98fc-dae804d6e4a4\") " pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.392267 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tfv\" (UniqueName: \"kubernetes.io/projected/7b76d715-9901-4789-971e-8ba3bd1be5a9-kube-api-access-27tfv\") pod \"watcher-02c3-account-create-update-7ss8l\" (UID: \"7b76d715-9901-4789-971e-8ba3bd1be5a9\") " pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.508185 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.539927 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.771159 4922 generic.go:334] "Generic (PLEG): container finished" podID="d48da751-d5f8-4ef5-b2a0-33864b35ba6c" containerID="306f6e7a38c62f6fc6ed0b9f0d6b1a1e49320d852ec0070152f4991feee54f36" exitCode=0 Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.771805 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s566l" event={"ID":"d48da751-d5f8-4ef5-b2a0-33864b35ba6c","Type":"ContainerDied","Data":"306f6e7a38c62f6fc6ed0b9f0d6b1a1e49320d852ec0070152f4991feee54f36"} Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.773706 4922 generic.go:334] "Generic (PLEG): container finished" podID="560e44e8-1468-48d7-90b7-d205bdb05f9d" containerID="9cf8cdb005164cbd233fd5bd637e836d1553871dd6e551dee01b7451ae4955dc" exitCode=0 Jan 26 14:27:39 crc kubenswrapper[4922]: I0126 14:27:39.773853 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6410-account-create-update-5f45b" event={"ID":"560e44e8-1468-48d7-90b7-d205bdb05f9d","Type":"ContainerDied","Data":"9cf8cdb005164cbd233fd5bd637e836d1553871dd6e551dee01b7451ae4955dc"} Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.068252 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-mlnbh"] Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.085837 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-02c3-account-create-update-7ss8l"] Jan 26 14:27:40 crc kubenswrapper[4922]: W0126 14:27:40.185844 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b76d715_9901_4789_971e_8ba3bd1be5a9.slice/crio-5ec5753a6b31528e6faab226a938f40c9f6dccae60b4104780b5e637dc89f018 WatchSource:0}: Error finding container 5ec5753a6b31528e6faab226a938f40c9f6dccae60b4104780b5e637dc89f018: Status 404 returned error can't find the container with id 5ec5753a6b31528e6faab226a938f40c9f6dccae60b4104780b5e637dc89f018 Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.188821 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.278633 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll2nt\" (UniqueName: \"kubernetes.io/projected/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-kube-api-access-ll2nt\") pod \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\" (UID: \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\") " Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.278698 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-operator-scripts\") pod \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\" (UID: \"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275\") " Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.279654 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02d62f1e-149e-4aa1-b3d3-54cdcb1a2275" (UID: "02d62f1e-149e-4aa1-b3d3-54cdcb1a2275"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.284471 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-kube-api-access-ll2nt" (OuterVolumeSpecName: "kube-api-access-ll2nt") pod "02d62f1e-149e-4aa1-b3d3-54cdcb1a2275" (UID: "02d62f1e-149e-4aa1-b3d3-54cdcb1a2275"). InnerVolumeSpecName "kube-api-access-ll2nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.380791 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll2nt\" (UniqueName: \"kubernetes.io/projected/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-kube-api-access-ll2nt\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.381053 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.384668 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.472478 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.483335 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8907e8-ff60-47a7-a7da-cce27fd8ede1-operator-scripts\") pod \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\" (UID: \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\") " Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.483423 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b64z7\" (UniqueName: \"kubernetes.io/projected/de8907e8-ff60-47a7-a7da-cce27fd8ede1-kube-api-access-b64z7\") pod \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\" (UID: \"de8907e8-ff60-47a7-a7da-cce27fd8ede1\") " Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.483823 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de8907e8-ff60-47a7-a7da-cce27fd8ede1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de8907e8-ff60-47a7-a7da-cce27fd8ede1" (UID: "de8907e8-ff60-47a7-a7da-cce27fd8ede1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.490282 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8907e8-ff60-47a7-a7da-cce27fd8ede1-kube-api-access-b64z7" (OuterVolumeSpecName: "kube-api-access-b64z7") pod "de8907e8-ff60-47a7-a7da-cce27fd8ede1" (UID: "de8907e8-ff60-47a7-a7da-cce27fd8ede1"). InnerVolumeSpecName "kube-api-access-b64z7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.585122 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57fl9\" (UniqueName: \"kubernetes.io/projected/2ad42666-6f72-41ee-b3b6-c34eae6345b8-kube-api-access-57fl9\") pod \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\" (UID: \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\") " Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.585178 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ad42666-6f72-41ee-b3b6-c34eae6345b8-operator-scripts\") pod \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\" (UID: \"2ad42666-6f72-41ee-b3b6-c34eae6345b8\") " Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.585579 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8907e8-ff60-47a7-a7da-cce27fd8ede1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.585590 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b64z7\" (UniqueName: \"kubernetes.io/projected/de8907e8-ff60-47a7-a7da-cce27fd8ede1-kube-api-access-b64z7\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.585768 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ad42666-6f72-41ee-b3b6-c34eae6345b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ad42666-6f72-41ee-b3b6-c34eae6345b8" (UID: "2ad42666-6f72-41ee-b3b6-c34eae6345b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.589682 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad42666-6f72-41ee-b3b6-c34eae6345b8-kube-api-access-57fl9" (OuterVolumeSpecName: "kube-api-access-57fl9") pod "2ad42666-6f72-41ee-b3b6-c34eae6345b8" (UID: "2ad42666-6f72-41ee-b3b6-c34eae6345b8"). InnerVolumeSpecName "kube-api-access-57fl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.687390 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57fl9\" (UniqueName: \"kubernetes.io/projected/2ad42666-6f72-41ee-b3b6-c34eae6345b8-kube-api-access-57fl9\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.687567 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ad42666-6f72-41ee-b3b6-c34eae6345b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.789875 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rcb2t" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.789909 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcb2t" event={"ID":"02d62f1e-149e-4aa1-b3d3-54cdcb1a2275","Type":"ContainerDied","Data":"a6ebe0ec2bd39644484e43a57b2e227be536eacfbce493328e4f92c1364e6a20"} Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.790370 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6ebe0ec2bd39644484e43a57b2e227be536eacfbce493328e4f92c1364e6a20" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.792558 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ba35-account-create-update-p8kwd" event={"ID":"de8907e8-ff60-47a7-a7da-cce27fd8ede1","Type":"ContainerDied","Data":"b73fe9572db83dec5d4356272bf245c66b6a3616d6de5600da05140f5cae53c6"} Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.792602 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73fe9572db83dec5d4356272bf245c66b6a3616d6de5600da05140f5cae53c6" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.792670 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ba35-account-create-update-p8kwd" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.799244 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mnz7q" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.799274 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mnz7q" event={"ID":"2ad42666-6f72-41ee-b3b6-c34eae6345b8","Type":"ContainerDied","Data":"8c7f63c0dd38b9fae8e43b46c7989689c00b1d27a355a3bf8f3792c9246f03b5"} Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.799630 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7f63c0dd38b9fae8e43b46c7989689c00b1d27a355a3bf8f3792c9246f03b5" Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.807596 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerStarted","Data":"e0ae951b5d6ab181ef690074bf93ba46374ce941165375bba17507591799fb67"} Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.811878 4922 generic.go:334] "Generic (PLEG): container finished" podID="7b76d715-9901-4789-971e-8ba3bd1be5a9" containerID="a9f49dc17f42b4766399126cc7a3f74b9176b4189aa07abda9a8385c7293685e" exitCode=0 Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.812028 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-02c3-account-create-update-7ss8l" event={"ID":"7b76d715-9901-4789-971e-8ba3bd1be5a9","Type":"ContainerDied","Data":"a9f49dc17f42b4766399126cc7a3f74b9176b4189aa07abda9a8385c7293685e"} Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.812088 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-02c3-account-create-update-7ss8l" event={"ID":"7b76d715-9901-4789-971e-8ba3bd1be5a9","Type":"ContainerStarted","Data":"5ec5753a6b31528e6faab226a938f40c9f6dccae60b4104780b5e637dc89f018"} Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.814794 4922 generic.go:334] "Generic (PLEG): container finished" podID="22235e5e-84ab-4632-98fc-dae804d6e4a4" containerID="9b50ed01acbfecf801c8b71ee3866a707395362dd4aabc7d480babb6bec82a62" exitCode=0 Jan 26 
14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.815091 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-mlnbh" event={"ID":"22235e5e-84ab-4632-98fc-dae804d6e4a4","Type":"ContainerDied","Data":"9b50ed01acbfecf801c8b71ee3866a707395362dd4aabc7d480babb6bec82a62"} Jan 26 14:27:40 crc kubenswrapper[4922]: I0126 14:27:40.815123 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-mlnbh" event={"ID":"22235e5e-84ab-4632-98fc-dae804d6e4a4","Type":"ContainerStarted","Data":"761fd6b343a1ae1b01a87ac323a15039878610d095512e4463b0a4bf7bae640d"} Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.280218 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s566l" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.285554 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.422790 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl4h9\" (UniqueName: \"kubernetes.io/projected/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-kube-api-access-hl4h9\") pod \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\" (UID: \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\") " Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.422901 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-operator-scripts\") pod \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\" (UID: \"d48da751-d5f8-4ef5-b2a0-33864b35ba6c\") " Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.422965 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4d5q\" (UniqueName: \"kubernetes.io/projected/560e44e8-1468-48d7-90b7-d205bdb05f9d-kube-api-access-s4d5q\") pod \"560e44e8-1468-48d7-90b7-d205bdb05f9d\" (UID: \"560e44e8-1468-48d7-90b7-d205bdb05f9d\") " Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.423010 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/560e44e8-1468-48d7-90b7-d205bdb05f9d-operator-scripts\") pod \"560e44e8-1468-48d7-90b7-d205bdb05f9d\" (UID: \"560e44e8-1468-48d7-90b7-d205bdb05f9d\") " Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.424039 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d48da751-d5f8-4ef5-b2a0-33864b35ba6c" (UID: "d48da751-d5f8-4ef5-b2a0-33864b35ba6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.424359 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/560e44e8-1468-48d7-90b7-d205bdb05f9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "560e44e8-1468-48d7-90b7-d205bdb05f9d" (UID: "560e44e8-1468-48d7-90b7-d205bdb05f9d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.431236 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-kube-api-access-hl4h9" (OuterVolumeSpecName: "kube-api-access-hl4h9") pod "d48da751-d5f8-4ef5-b2a0-33864b35ba6c" (UID: "d48da751-d5f8-4ef5-b2a0-33864b35ba6c"). InnerVolumeSpecName "kube-api-access-hl4h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.431278 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560e44e8-1468-48d7-90b7-d205bdb05f9d-kube-api-access-s4d5q" (OuterVolumeSpecName: "kube-api-access-s4d5q") pod "560e44e8-1468-48d7-90b7-d205bdb05f9d" (UID: "560e44e8-1468-48d7-90b7-d205bdb05f9d"). InnerVolumeSpecName "kube-api-access-s4d5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.524900 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4d5q\" (UniqueName: \"kubernetes.io/projected/560e44e8-1468-48d7-90b7-d205bdb05f9d-kube-api-access-s4d5q\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.524938 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/560e44e8-1468-48d7-90b7-d205bdb05f9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.524951 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl4h9\" (UniqueName: \"kubernetes.io/projected/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-kube-api-access-hl4h9\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.524961 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d48da751-d5f8-4ef5-b2a0-33864b35ba6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.835917 4922 generic.go:334] "Generic (PLEG): container finished" podID="e3ea763a-f09f-435f-b75d-69e3b9160943" containerID="e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f" exitCode=0 Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.836000 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e3ea763a-f09f-435f-b75d-69e3b9160943","Type":"ContainerDied","Data":"e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f"} Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.838487 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s566l" event={"ID":"d48da751-d5f8-4ef5-b2a0-33864b35ba6c","Type":"ContainerDied","Data":"251c64533aa25380a5dc9b35a1d0a103cce94792f8af5f69d79c8aa4c3ca6285"} Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.838540 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251c64533aa25380a5dc9b35a1d0a103cce94792f8af5f69d79c8aa4c3ca6285" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.838540 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s566l" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.842512 4922 generic.go:334] "Generic (PLEG): container finished" podID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" containerID="2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754" exitCode=0 Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.842562 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0","Type":"ContainerDied","Data":"2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754"} Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.856586 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6410-account-create-update-5f45b" event={"ID":"560e44e8-1468-48d7-90b7-d205bdb05f9d","Type":"ContainerDied","Data":"0b6ac2176dd4708ad1959511ce9173cd568a2ffc28fd097ed315dca0724df934"} Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.856631 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b6ac2176dd4708ad1959511ce9173cd568a2ffc28fd097ed315dca0724df934" Jan 26 14:27:41 crc kubenswrapper[4922]: I0126 14:27:41.856659 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6410-account-create-update-5f45b" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.118835 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.246758 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pjnk\" (UniqueName: \"kubernetes.io/projected/22235e5e-84ab-4632-98fc-dae804d6e4a4-kube-api-access-2pjnk\") pod \"22235e5e-84ab-4632-98fc-dae804d6e4a4\" (UID: \"22235e5e-84ab-4632-98fc-dae804d6e4a4\") " Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.246805 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22235e5e-84ab-4632-98fc-dae804d6e4a4-operator-scripts\") pod \"22235e5e-84ab-4632-98fc-dae804d6e4a4\" (UID: \"22235e5e-84ab-4632-98fc-dae804d6e4a4\") " Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.247990 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22235e5e-84ab-4632-98fc-dae804d6e4a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22235e5e-84ab-4632-98fc-dae804d6e4a4" (UID: "22235e5e-84ab-4632-98fc-dae804d6e4a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.265722 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22235e5e-84ab-4632-98fc-dae804d6e4a4-kube-api-access-2pjnk" (OuterVolumeSpecName: "kube-api-access-2pjnk") pod "22235e5e-84ab-4632-98fc-dae804d6e4a4" (UID: "22235e5e-84ab-4632-98fc-dae804d6e4a4"). InnerVolumeSpecName "kube-api-access-2pjnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.335167 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-x4rqw" podUID="1a2c2044-5422-40dc-92f5-051f1da6b2a2" containerName="ovn-controller" probeResult="failure" output=< Jan 26 14:27:42 crc kubenswrapper[4922]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 14:27:42 crc kubenswrapper[4922]: > Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.349055 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pjnk\" (UniqueName: \"kubernetes.io/projected/22235e5e-84ab-4632-98fc-dae804d6e4a4-kube-api-access-2pjnk\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.349121 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22235e5e-84ab-4632-98fc-dae804d6e4a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.349124 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.408971 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fpgzk" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.450001 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27tfv\" (UniqueName: \"kubernetes.io/projected/7b76d715-9901-4789-971e-8ba3bd1be5a9-kube-api-access-27tfv\") pod \"7b76d715-9901-4789-971e-8ba3bd1be5a9\" (UID: \"7b76d715-9901-4789-971e-8ba3bd1be5a9\") " Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.450221 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b76d715-9901-4789-971e-8ba3bd1be5a9-operator-scripts\") pod \"7b76d715-9901-4789-971e-8ba3bd1be5a9\" (UID: \"7b76d715-9901-4789-971e-8ba3bd1be5a9\") " Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.450666 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b76d715-9901-4789-971e-8ba3bd1be5a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b76d715-9901-4789-971e-8ba3bd1be5a9" (UID: "7b76d715-9901-4789-971e-8ba3bd1be5a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.450909 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b76d715-9901-4789-971e-8ba3bd1be5a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.453406 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b76d715-9901-4789-971e-8ba3bd1be5a9-kube-api-access-27tfv" (OuterVolumeSpecName: "kube-api-access-27tfv") pod "7b76d715-9901-4789-971e-8ba3bd1be5a9" (UID: "7b76d715-9901-4789-971e-8ba3bd1be5a9"). InnerVolumeSpecName "kube-api-access-27tfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.552262 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27tfv\" (UniqueName: \"kubernetes.io/projected/7b76d715-9901-4789-971e-8ba3bd1be5a9-kube-api-access-27tfv\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.880422 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-mlnbh" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.880774 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-mlnbh" event={"ID":"22235e5e-84ab-4632-98fc-dae804d6e4a4","Type":"ContainerDied","Data":"761fd6b343a1ae1b01a87ac323a15039878610d095512e4463b0a4bf7bae640d"} Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.880913 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="761fd6b343a1ae1b01a87ac323a15039878610d095512e4463b0a4bf7bae640d" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.883139 4922 generic.go:334] "Generic (PLEG): container finished" podID="1881b31a-fd0f-40c8-a098-10888cec43db" containerID="d264fb210e4eb6f10e19d40617b25321efcdd1122070fdff3b7a7d19f57ebfef" exitCode=0 Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.883292 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"1881b31a-fd0f-40c8-a098-10888cec43db","Type":"ContainerDied","Data":"d264fb210e4eb6f10e19d40617b25321efcdd1122070fdff3b7a7d19f57ebfef"} Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.894723 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e3ea763a-f09f-435f-b75d-69e3b9160943","Type":"ContainerStarted","Data":"e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1"} Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.895236 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.911540 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-02c3-account-create-update-7ss8l" event={"ID":"7b76d715-9901-4789-971e-8ba3bd1be5a9","Type":"ContainerDied","Data":"5ec5753a6b31528e6faab226a938f40c9f6dccae60b4104780b5e637dc89f018"} Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.911582 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ec5753a6b31528e6faab226a938f40c9f6dccae60b4104780b5e637dc89f018" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.911559 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-02c3-account-create-update-7ss8l" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.922572 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0","Type":"ContainerStarted","Data":"59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c"} Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.923657 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.960803 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.892355423 podStartE2EDuration="1m0.960787267s" podCreationTimestamp="2026-01-26 14:26:42 +0000 UTC" firstStartedPulling="2026-01-26 14:26:59.137362275 +0000 UTC m=+1036.339625047" lastFinishedPulling="2026-01-26 14:27:06.205794109 +0000 UTC m=+1043.408056891" observedRunningTime="2026-01-26 14:27:42.957796159 +0000 UTC m=+1080.160058941" watchObservedRunningTime="2026-01-26 14:27:42.960787267 +0000 UTC m=+1080.163050059" Jan 26 14:27:42 crc kubenswrapper[4922]: I0126 14:27:42.984625 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.654084021 podStartE2EDuration="1m0.984606795s" podCreationTimestamp="2026-01-26 14:26:42 +0000 UTC" firstStartedPulling="2026-01-26 14:26:59.093213812 +0000 UTC m=+1036.295476584" lastFinishedPulling="2026-01-26 14:27:06.423736556 +0000 UTC m=+1043.625999358" observedRunningTime="2026-01-26 14:27:42.982705606 +0000 UTC m=+1080.184968378" watchObservedRunningTime="2026-01-26 14:27:42.984606795 +0000 UTC m=+1080.186869567" Jan 26 14:27:44 crc kubenswrapper[4922]: I0126 14:27:44.941987 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"1881b31a-fd0f-40c8-a098-10888cec43db","Type":"ContainerStarted","Data":"0f33ace33940d9aad6843ca08f5ccb9625da1742e7cb0a9a160c258f715ddb14"} Jan 26 14:27:44 crc kubenswrapper[4922]: I0126 14:27:44.942683 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 26 14:27:44 crc kubenswrapper[4922]: I0126 14:27:44.944755 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerStarted","Data":"26b9162a7a8a7bd5d596bbbadb2a769aec7bf9beec5a4756e37a2251bd568779"} Jan 26 14:27:45 crc kubenswrapper[4922]: I0126 14:27:45.000540 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=55.762790552 podStartE2EDuration="1m3.00052383s" podCreationTimestamp="2026-01-26 14:26:42 +0000 UTC" firstStartedPulling="2026-01-26 14:26:58.96651586 +0000 UTC m=+1036.168778622" lastFinishedPulling="2026-01-26 14:27:06.204249128 +0000 UTC m=+1043.406511900" observedRunningTime="2026-01-26 14:27:44.973311083 +0000 UTC m=+1082.175573855" watchObservedRunningTime="2026-01-26 14:27:45.00052383 +0000 UTC m=+1082.202786592" Jan 26 14:27:45 crc kubenswrapper[4922]: I0126 14:27:45.001584 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=10.663737467 podStartE2EDuration="56.001577878s" podCreationTimestamp="2026-01-26 14:26:49 +0000 UTC" 
firstStartedPulling="2026-01-26 14:26:58.970993418 +0000 UTC m=+1036.173256190" lastFinishedPulling="2026-01-26 14:27:44.308833789 +0000 UTC m=+1081.511096601" observedRunningTime="2026-01-26 14:27:44.99746499 +0000 UTC m=+1082.199727762" watchObservedRunningTime="2026-01-26 14:27:45.001577878 +0000 UTC m=+1082.203840650" Jan 26 14:27:45 crc kubenswrapper[4922]: I0126 14:27:45.377862 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:45 crc kubenswrapper[4922]: I0126 14:27:45.674739 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mnz7q"] Jan 26 14:27:45 crc kubenswrapper[4922]: I0126 14:27:45.688012 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mnz7q"] Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.103261 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad42666-6f72-41ee-b3b6-c34eae6345b8" path="/var/lib/kubelet/pods/2ad42666-6f72-41ee-b3b6-c34eae6345b8/volumes" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.347637 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-x4rqw" podUID="1a2c2044-5422-40dc-92f5-051f1da6b2a2" containerName="ovn-controller" probeResult="failure" output=< Jan 26 14:27:47 crc kubenswrapper[4922]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 14:27:47 crc kubenswrapper[4922]: > Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.400431 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fpgzk" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.600239 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x4rqw-config-dlr22"] Jan 26 14:27:47 crc kubenswrapper[4922]: E0126 14:27:47.600764 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d62f1e-149e-4aa1-b3d3-54cdcb1a2275" containerName="mariadb-database-create" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.600782 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d62f1e-149e-4aa1-b3d3-54cdcb1a2275" containerName="mariadb-database-create" Jan 26 14:27:47 crc kubenswrapper[4922]: E0126 14:27:47.600791 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22235e5e-84ab-4632-98fc-dae804d6e4a4" containerName="mariadb-database-create" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.600797 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="22235e5e-84ab-4632-98fc-dae804d6e4a4" containerName="mariadb-database-create" Jan 26 14:27:47 crc kubenswrapper[4922]: E0126 14:27:47.600815 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560e44e8-1468-48d7-90b7-d205bdb05f9d" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.600822 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="560e44e8-1468-48d7-90b7-d205bdb05f9d" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: E0126 14:27:47.600832 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b76d715-9901-4789-971e-8ba3bd1be5a9" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.600838 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b76d715-9901-4789-971e-8ba3bd1be5a9" containerName="mariadb-account-create-update" Jan 26 
14:27:47 crc kubenswrapper[4922]: E0126 14:27:47.600846 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48da751-d5f8-4ef5-b2a0-33864b35ba6c" containerName="mariadb-database-create" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.600851 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48da751-d5f8-4ef5-b2a0-33864b35ba6c" containerName="mariadb-database-create" Jan 26 14:27:47 crc kubenswrapper[4922]: E0126 14:27:47.600862 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad42666-6f72-41ee-b3b6-c34eae6345b8" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.600867 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad42666-6f72-41ee-b3b6-c34eae6345b8" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: E0126 14:27:47.600881 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8907e8-ff60-47a7-a7da-cce27fd8ede1" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.600886 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8907e8-ff60-47a7-a7da-cce27fd8ede1" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.601023 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="22235e5e-84ab-4632-98fc-dae804d6e4a4" containerName="mariadb-database-create" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.601034 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48da751-d5f8-4ef5-b2a0-33864b35ba6c" containerName="mariadb-database-create" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.601044 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b76d715-9901-4789-971e-8ba3bd1be5a9" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.601051 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d62f1e-149e-4aa1-b3d3-54cdcb1a2275" containerName="mariadb-database-create" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.601079 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad42666-6f72-41ee-b3b6-c34eae6345b8" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.601090 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="560e44e8-1468-48d7-90b7-d205bdb05f9d" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.601101 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8907e8-ff60-47a7-a7da-cce27fd8ede1" containerName="mariadb-account-create-update" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.601593 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.604776 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.611748 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw-config-dlr22"] Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.779500 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.779641 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run-ovn\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.779768 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgbw\" (UniqueName: \"kubernetes.io/projected/e53f647a-cc36-4bb2-a72c-8575aedbca32-kube-api-access-9zgbw\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.779834 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-scripts\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.779922 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-log-ovn\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.779967 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-additional-scripts\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.881181 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-scripts\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.881272 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-log-ovn\") pod 
\"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.881296 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-additional-scripts\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.881327 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.881379 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run-ovn\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.881446 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgbw\" (UniqueName: \"kubernetes.io/projected/e53f647a-cc36-4bb2-a72c-8575aedbca32-kube-api-access-9zgbw\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.882516 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run-ovn\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.882540 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.882533 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-log-ovn\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.882852 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-additional-scripts\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.883986 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-scripts\") pod 
\"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.903993 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgbw\" (UniqueName: \"kubernetes.io/projected/e53f647a-cc36-4bb2-a72c-8575aedbca32-kube-api-access-9zgbw\") pod \"ovn-controller-x4rqw-config-dlr22\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:47 crc kubenswrapper[4922]: I0126 14:27:47.927023 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:48 crc kubenswrapper[4922]: I0126 14:27:48.478923 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw-config-dlr22"] Jan 26 14:27:48 crc kubenswrapper[4922]: W0126 14:27:48.493983 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode53f647a_cc36_4bb2_a72c_8575aedbca32.slice/crio-6c7dd102d00e6fc4e5854e3d6029be8005a451d4e0fa01b8a318a2d308d67d94 WatchSource:0}: Error finding container 6c7dd102d00e6fc4e5854e3d6029be8005a451d4e0fa01b8a318a2d308d67d94: Status 404 returned error can't find the container with id 6c7dd102d00e6fc4e5854e3d6029be8005a451d4e0fa01b8a318a2d308d67d94 Jan 26 14:27:48 crc kubenswrapper[4922]: I0126 14:27:48.979120 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-dlr22" event={"ID":"e53f647a-cc36-4bb2-a72c-8575aedbca32","Type":"ContainerStarted","Data":"6c7dd102d00e6fc4e5854e3d6029be8005a451d4e0fa01b8a318a2d308d67d94"} Jan 26 14:27:49 crc kubenswrapper[4922]: I0126 14:27:49.990896 4922 generic.go:334] "Generic (PLEG): container finished" podID="e53f647a-cc36-4bb2-a72c-8575aedbca32" containerID="58b9816cb6028c07e4d32713a778cf725474dc6dcc68f36e368984c76cde6237" exitCode=0 Jan 26 14:27:49 crc kubenswrapper[4922]: I0126 14:27:49.991151 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-dlr22" event={"ID":"e53f647a-cc36-4bb2-a72c-8575aedbca32","Type":"ContainerDied","Data":"58b9816cb6028c07e4d32713a778cf725474dc6dcc68f36e368984c76cde6237"} Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.377537 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.380306 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.682199 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hh472"] Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.683713 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hh472" Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.685617 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.701141 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hh472"] Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.862457 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fmgs\" (UniqueName: \"kubernetes.io/projected/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-kube-api-access-9fmgs\") pod \"root-account-create-update-hh472\" (UID: \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\") " pod="openstack/root-account-create-update-hh472" Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.862700 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-operator-scripts\") pod \"root-account-create-update-hh472\" (UID: \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\") " pod="openstack/root-account-create-update-hh472" Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.964129 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fmgs\" (UniqueName: \"kubernetes.io/projected/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-kube-api-access-9fmgs\") pod \"root-account-create-update-hh472\" (UID: \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\") " pod="openstack/root-account-create-update-hh472" Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.964272 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-operator-scripts\") pod \"root-account-create-update-hh472\" (UID: \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\") " pod="openstack/root-account-create-update-hh472" Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.965396 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-operator-scripts\") pod \"root-account-create-update-hh472\" (UID: \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\") " pod="openstack/root-account-create-update-hh472" Jan 26 14:27:50 crc kubenswrapper[4922]: I0126 14:27:50.997791 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fmgs\" (UniqueName: \"kubernetes.io/projected/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-kube-api-access-9fmgs\") pod \"root-account-create-update-hh472\" (UID: \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\") " pod="openstack/root-account-create-update-hh472" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.006602 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hh472" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.015234 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9mb5n" event={"ID":"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb","Type":"ContainerStarted","Data":"42187ea23f551e3a6ecbdaa86f590bfe0c6b4ea6ca6f6f507a6934717190cf42"} Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.021515 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.103130 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9mb5n" podStartSLOduration=2.607419145 podStartE2EDuration="31.103106494s" podCreationTimestamp="2026-01-26 14:27:20 +0000 UTC" firstStartedPulling="2026-01-26 14:27:21.657705848 +0000 UTC m=+1058.859968650" lastFinishedPulling="2026-01-26 14:27:50.153393227 +0000 UTC m=+1087.355655999" observedRunningTime="2026-01-26 14:27:51.048050773 +0000 UTC m=+1088.250313555" watchObservedRunningTime="2026-01-26 14:27:51.103106494 +0000 UTC m=+1088.305369316" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.499477 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.666578 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hh472"] Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.674995 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zgbw\" (UniqueName: \"kubernetes.io/projected/e53f647a-cc36-4bb2-a72c-8575aedbca32-kube-api-access-9zgbw\") pod \"e53f647a-cc36-4bb2-a72c-8575aedbca32\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675085 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-additional-scripts\") pod \"e53f647a-cc36-4bb2-a72c-8575aedbca32\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675177 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run-ovn\") pod \"e53f647a-cc36-4bb2-a72c-8575aedbca32\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675233 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run\") pod \"e53f647a-cc36-4bb2-a72c-8575aedbca32\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675258 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-scripts\") pod \"e53f647a-cc36-4bb2-a72c-8575aedbca32\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675347 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-log-ovn\") pod \"e53f647a-cc36-4bb2-a72c-8575aedbca32\" (UID: \"e53f647a-cc36-4bb2-a72c-8575aedbca32\") " Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675444 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e53f647a-cc36-4bb2-a72c-8575aedbca32" (UID: "e53f647a-cc36-4bb2-a72c-8575aedbca32"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675506 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run" (OuterVolumeSpecName: "var-run") pod "e53f647a-cc36-4bb2-a72c-8575aedbca32" (UID: "e53f647a-cc36-4bb2-a72c-8575aedbca32"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675547 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e53f647a-cc36-4bb2-a72c-8575aedbca32" (UID: "e53f647a-cc36-4bb2-a72c-8575aedbca32"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675768 4922 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675789 4922 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.675800 4922 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e53f647a-cc36-4bb2-a72c-8575aedbca32-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.676164 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e53f647a-cc36-4bb2-a72c-8575aedbca32" (UID: "e53f647a-cc36-4bb2-a72c-8575aedbca32"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.676338 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-scripts" (OuterVolumeSpecName: "scripts") pod "e53f647a-cc36-4bb2-a72c-8575aedbca32" (UID: "e53f647a-cc36-4bb2-a72c-8575aedbca32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.679866 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53f647a-cc36-4bb2-a72c-8575aedbca32-kube-api-access-9zgbw" (OuterVolumeSpecName: "kube-api-access-9zgbw") pod "e53f647a-cc36-4bb2-a72c-8575aedbca32" (UID: "e53f647a-cc36-4bb2-a72c-8575aedbca32"). InnerVolumeSpecName "kube-api-access-9zgbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.777257 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.777302 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zgbw\" (UniqueName: \"kubernetes.io/projected/e53f647a-cc36-4bb2-a72c-8575aedbca32-kube-api-access-9zgbw\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:51 crc kubenswrapper[4922]: I0126 14:27:51.777316 4922 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e53f647a-cc36-4bb2-a72c-8575aedbca32-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.021177 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hh472" event={"ID":"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2","Type":"ContainerStarted","Data":"b88d0894b35842223f2f2f9e120d9d8d9165a952d40e46caf68d4112267be969"} Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.021218 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hh472" event={"ID":"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2","Type":"ContainerStarted","Data":"b2f055b96546b9689368e20805e374b6422396809a6c4b6c00d0c99bd4ae045e"} Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.023589 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-dlr22" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.029142 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-dlr22" event={"ID":"e53f647a-cc36-4bb2-a72c-8575aedbca32","Type":"ContainerDied","Data":"6c7dd102d00e6fc4e5854e3d6029be8005a451d4e0fa01b8a318a2d308d67d94"} Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.029200 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c7dd102d00e6fc4e5854e3d6029be8005a451d4e0fa01b8a318a2d308d67d94" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.046301 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-hh472" podStartSLOduration=2.046280038 podStartE2EDuration="2.046280038s" podCreationTimestamp="2026-01-26 14:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:27:52.039009927 +0000 UTC m=+1089.241272699" watchObservedRunningTime="2026-01-26 14:27:52.046280038 +0000 UTC m=+1089.248542820" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.082658 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:27:52 crc kubenswrapper[4922]: E0126 14:27:52.082840 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 14:27:52 crc kubenswrapper[4922]: E0126 14:27:52.082884 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 
26 14:27:52 crc kubenswrapper[4922]: E0126 14:27:52.082946 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift podName:03d225b5-5466-45de-9417-54a11fa79429 nodeName:}" failed. No retries permitted until 2026-01-26 14:28:24.082928775 +0000 UTC m=+1121.285191547 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift") pod "swift-storage-0" (UID: "03d225b5-5466-45de-9417-54a11fa79429") : configmap "swift-ring-files" not found Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.343846 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-x4rqw" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.609256 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x4rqw-config-dlr22"] Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.618807 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-x4rqw-config-dlr22"] Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.654759 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x4rqw-config-tghpt"] Jan 26 14:27:52 crc kubenswrapper[4922]: E0126 14:27:52.655110 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53f647a-cc36-4bb2-a72c-8575aedbca32" containerName="ovn-config" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.655125 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e53f647a-cc36-4bb2-a72c-8575aedbca32" containerName="ovn-config" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.655301 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53f647a-cc36-4bb2-a72c-8575aedbca32" containerName="ovn-config" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.655827 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.658041 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.668692 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw-config-tghpt"] Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.794081 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-log-ovn\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.794129 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-additional-scripts\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.794150 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run-ovn\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.794384 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnvvz\" (UniqueName: \"kubernetes.io/projected/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-kube-api-access-gnvvz\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.794479 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-scripts\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.794555 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.895621 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnvvz\" (UniqueName: \"kubernetes.io/projected/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-kube-api-access-gnvvz\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.895679 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-scripts\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.895715 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.895774 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-log-ovn\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.895796 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-additional-scripts\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.895812 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run-ovn\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.896061 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.896098 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run-ovn\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.896090 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-log-ovn\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.896527 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-additional-scripts\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.898516 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-scripts\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.917279 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnvvz\" (UniqueName: \"kubernetes.io/projected/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-kube-api-access-gnvvz\") pod \"ovn-controller-x4rqw-config-tghpt\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:52 crc kubenswrapper[4922]: I0126 14:27:52.973225 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.033006 4922 generic.go:334] "Generic (PLEG): container finished" podID="0671d2d0-0598-41da-bfaa-a46b7b3a0bf2" containerID="b88d0894b35842223f2f2f9e120d9d8d9165a952d40e46caf68d4112267be969" exitCode=0 Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.033049 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hh472" event={"ID":"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2","Type":"ContainerDied","Data":"b88d0894b35842223f2f2f9e120d9d8d9165a952d40e46caf68d4112267be969"} Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.108264 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53f647a-cc36-4bb2-a72c-8575aedbca32" path="/var/lib/kubelet/pods/e53f647a-cc36-4bb2-a72c-8575aedbca32/volumes" Jan 26 14:27:53 crc kubenswrapper[4922]: W0126 14:27:53.451699 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80b3afd5_29a7_4b9f_aafa_cae73c273fcd.slice/crio-a74a46d049e331a66b678c3d9d7f16a41086c636d1dc2d9e5ca44eeb21ed7df1 WatchSource:0}: Error finding container a74a46d049e331a66b678c3d9d7f16a41086c636d1dc2d9e5ca44eeb21ed7df1: Status 404 returned error can't find the container with id a74a46d049e331a66b678c3d9d7f16a41086c636d1dc2d9e5ca44eeb21ed7df1 Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.452389 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw-config-tghpt"] Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.505652 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.506085 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="config-reloader" containerID="cri-o://e0ae951b5d6ab181ef690074bf93ba46374ce941165375bba17507591799fb67" gracePeriod=600 Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.506121 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="thanos-sidecar" containerID="cri-o://26b9162a7a8a7bd5d596bbbadb2a769aec7bf9beec5a4756e37a2251bd568779" gracePeriod=600 Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.506033 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="prometheus" 
containerID="cri-o://ba35316ec29aa18886606a200946cffea4445a9547bda3a81b4193b56a03f676" gracePeriod=600 Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.534565 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 26 14:27:53 crc kubenswrapper[4922]: I0126 14:27:53.798184 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e3ea763a-f09f-435f-b75d-69e3b9160943" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.043146 4922 generic.go:334] "Generic (PLEG): container finished" podID="1bf34eda-49e0-412d-82b6-fe587116900f" containerID="26b9162a7a8a7bd5d596bbbadb2a769aec7bf9beec5a4756e37a2251bd568779" exitCode=0 Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.043183 4922 generic.go:334] "Generic (PLEG): container finished" podID="1bf34eda-49e0-412d-82b6-fe587116900f" containerID="e0ae951b5d6ab181ef690074bf93ba46374ce941165375bba17507591799fb67" exitCode=0 Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.043194 4922 generic.go:334] "Generic (PLEG): container finished" podID="1bf34eda-49e0-412d-82b6-fe587116900f" containerID="ba35316ec29aa18886606a200946cffea4445a9547bda3a81b4193b56a03f676" exitCode=0 Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.043259 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerDied","Data":"26b9162a7a8a7bd5d596bbbadb2a769aec7bf9beec5a4756e37a2251bd568779"} Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.043290 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerDied","Data":"e0ae951b5d6ab181ef690074bf93ba46374ce941165375bba17507591799fb67"} Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.043306 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerDied","Data":"ba35316ec29aa18886606a200946cffea4445a9547bda3a81b4193b56a03f676"} Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.045057 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-tghpt" event={"ID":"80b3afd5-29a7-4b9f-aafa-cae73c273fcd","Type":"ContainerStarted","Data":"369c04c6f7f90ae2564ca2a53a473041f8373cf8cdbcf61758c40bce5d94a99e"} Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.045132 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-tghpt" event={"ID":"80b3afd5-29a7-4b9f-aafa-cae73c273fcd","Type":"ContainerStarted","Data":"a74a46d049e331a66b678c3d9d7f16a41086c636d1dc2d9e5ca44eeb21ed7df1"} Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.067238 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x4rqw-config-tghpt" podStartSLOduration=2.067220686 podStartE2EDuration="2.067220686s" podCreationTimestamp="2026-01-26 14:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:27:54.060806847 +0000 UTC 
m=+1091.263069619" watchObservedRunningTime="2026-01-26 14:27:54.067220686 +0000 UTC m=+1091.269483458" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.121629 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="1881b31a-fd0f-40c8-a098-10888cec43db" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.349452 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hh472" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.362460 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-operator-scripts\") pod \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\" (UID: \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.362674 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fmgs\" (UniqueName: \"kubernetes.io/projected/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-kube-api-access-9fmgs\") pod \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\" (UID: \"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.363820 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0671d2d0-0598-41da-bfaa-a46b7b3a0bf2" (UID: "0671d2d0-0598-41da-bfaa-a46b7b3a0bf2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.382555 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-kube-api-access-9fmgs" (OuterVolumeSpecName: "kube-api-access-9fmgs") pod "0671d2d0-0598-41da-bfaa-a46b7b3a0bf2" (UID: "0671d2d0-0598-41da-bfaa-a46b7b3a0bf2"). InnerVolumeSpecName "kube-api-access-9fmgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.464596 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.464640 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fmgs\" (UniqueName: \"kubernetes.io/projected/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2-kube-api-access-9fmgs\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.517974 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.565990 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-2\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.566079 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv6gw\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-kube-api-access-pv6gw\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.566141 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-1\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.566291 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.566331 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-config\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.566423 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-0\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.566462 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-web-config\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.566518 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1bf34eda-49e0-412d-82b6-fe587116900f-config-out\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.566571 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-tls-assets\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.566649 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-thanos-prometheus-http-client-file\") pod \"1bf34eda-49e0-412d-82b6-fe587116900f\" (UID: \"1bf34eda-49e0-412d-82b6-fe587116900f\") " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.567941 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.571128 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf34eda-49e0-412d-82b6-fe587116900f-config-out" (OuterVolumeSpecName: "config-out") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.571695 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-config" (OuterVolumeSpecName: "config") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.572474 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.573879 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.573964 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.575701 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.576443 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-kube-api-access-pv6gw" (OuterVolumeSpecName: "kube-api-access-pv6gw") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "kube-api-access-pv6gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.592175 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "pvc-89a18385-0704-44fa-a23b-cb95f8d108b3". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.606611 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-web-config" (OuterVolumeSpecName: "web-config") pod "1bf34eda-49e0-412d-82b6-fe587116900f" (UID: "1bf34eda-49e0-412d-82b6-fe587116900f"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669396 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669428 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv6gw\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-kube-api-access-pv6gw\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669440 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669510 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") on node \"crc\" " Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669525 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669535 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1bf34eda-49e0-412d-82b6-fe587116900f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669545 4922 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669555 4922 
reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1bf34eda-49e0-412d-82b6-fe587116900f-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669567 4922 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1bf34eda-49e0-412d-82b6-fe587116900f-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.669575 4922 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1bf34eda-49e0-412d-82b6-fe587116900f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.688216 4922 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.688517 4922 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-89a18385-0704-44fa-a23b-cb95f8d108b3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3") on node "crc" Jan 26 14:27:54 crc kubenswrapper[4922]: I0126 14:27:54.770987 4922 reconciler_common.go:293] "Volume detached for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.055603 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1bf34eda-49e0-412d-82b6-fe587116900f","Type":"ContainerDied","Data":"021f480b92f23db65089625144fbee4f51e2fd76a31042a82788d0bc2fdec6e5"} Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.055638 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.056016 4922 scope.go:117] "RemoveContainer" containerID="26b9162a7a8a7bd5d596bbbadb2a769aec7bf9beec5a4756e37a2251bd568779" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.057865 4922 generic.go:334] "Generic (PLEG): container finished" podID="80b3afd5-29a7-4b9f-aafa-cae73c273fcd" containerID="369c04c6f7f90ae2564ca2a53a473041f8373cf8cdbcf61758c40bce5d94a99e" exitCode=0 Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.058076 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-tghpt" event={"ID":"80b3afd5-29a7-4b9f-aafa-cae73c273fcd","Type":"ContainerDied","Data":"369c04c6f7f90ae2564ca2a53a473041f8373cf8cdbcf61758c40bce5d94a99e"} Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.059736 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hh472" event={"ID":"0671d2d0-0598-41da-bfaa-a46b7b3a0bf2","Type":"ContainerDied","Data":"b2f055b96546b9689368e20805e374b6422396809a6c4b6c00d0c99bd4ae045e"} Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.059869 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2f055b96546b9689368e20805e374b6422396809a6c4b6c00d0c99bd4ae045e" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.059769 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hh472" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.099805 4922 scope.go:117] "RemoveContainer" containerID="e0ae951b5d6ab181ef690074bf93ba46374ce941165375bba17507591799fb67" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.127604 4922 scope.go:117] "RemoveContainer" containerID="ba35316ec29aa18886606a200946cffea4445a9547bda3a81b4193b56a03f676" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.132825 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.145138 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.158474 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 14:27:55 crc kubenswrapper[4922]: E0126 14:27:55.158809 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0671d2d0-0598-41da-bfaa-a46b7b3a0bf2" containerName="mariadb-account-create-update" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.158827 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0671d2d0-0598-41da-bfaa-a46b7b3a0bf2" containerName="mariadb-account-create-update" Jan 26 14:27:55 crc kubenswrapper[4922]: E0126 14:27:55.158834 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="prometheus" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.158841 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="prometheus" Jan 26 14:27:55 crc kubenswrapper[4922]: E0126 14:27:55.158854 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="config-reloader" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.158861 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="config-reloader" Jan 26 14:27:55 crc kubenswrapper[4922]: E0126 14:27:55.158876 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="init-config-reloader" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.158881 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="init-config-reloader" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.158878 4922 scope.go:117] "RemoveContainer" containerID="db29138d2553985c8a0ac700ae413106b67e265d80edda513a17bed8b28ad52e" Jan 26 14:27:55 crc kubenswrapper[4922]: E0126 14:27:55.158893 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="thanos-sidecar" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.158997 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="thanos-sidecar" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.159178 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="config-reloader" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.159191 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0671d2d0-0598-41da-bfaa-a46b7b3a0bf2" containerName="mariadb-account-create-update" Jan 26 14:27:55 crc 
kubenswrapper[4922]: I0126 14:27:55.159200 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="thanos-sidecar" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.159215 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" containerName="prometheus" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.160807 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.165892 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.179223 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.179397 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.179500 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.179666 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8jmw5" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.179829 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.182678 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.182987 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.187283 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.188280 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286166 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsbxs\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-kube-api-access-wsbxs\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286208 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286231 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286249 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286283 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286314 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286337 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286409 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286478 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-config\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286511 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8606e862-2e96-4827-9cb1-7c699e93e8a0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286607 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286645 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.286680 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388160 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388226 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388304 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388422 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-config\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388503 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8606e862-2e96-4827-9cb1-7c699e93e8a0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388565 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388607 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388648 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388728 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsbxs\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-kube-api-access-wsbxs\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388771 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388806 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388842 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.388896 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.389169 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.389272 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.389876 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.393913 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.395618 4922 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.395647 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b3c4d564f6fc84458e9d6100c084ffc8a5ec4c60a4c8efc38ff6415485a8e6e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.395800 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.398521 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8606e862-2e96-4827-9cb1-7c699e93e8a0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.398561 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.398666 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.400841 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.401742 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-config\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.408701 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.413468 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsbxs\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-kube-api-access-wsbxs\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.428782 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:55 crc kubenswrapper[4922]: I0126 14:27:55.517145 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.117900 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 14:27:56 crc kubenswrapper[4922]: W0126 14:27:56.127246 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8606e862_2e96_4827_9cb1_7c699e93e8a0.slice/crio-6ad90e902c1f25153bd2eccf04fbcf9f9bb9fa211ae716c8af2106a3a32e5b4c WatchSource:0}: Error finding container 6ad90e902c1f25153bd2eccf04fbcf9f9bb9fa211ae716c8af2106a3a32e5b4c: Status 404 returned error can't find the container with id 6ad90e902c1f25153bd2eccf04fbcf9f9bb9fa211ae716c8af2106a3a32e5b4c Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.416624 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.507646 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-additional-scripts\") pod \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.507726 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-log-ovn\") pod \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.507761 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run-ovn\") pod \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.507797 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnvvz\" (UniqueName: \"kubernetes.io/projected/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-kube-api-access-gnvvz\") pod \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.507833 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run\") pod \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.507933 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-scripts\") pod \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\" (UID: \"80b3afd5-29a7-4b9f-aafa-cae73c273fcd\") " Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.508109 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "80b3afd5-29a7-4b9f-aafa-cae73c273fcd" (UID: "80b3afd5-29a7-4b9f-aafa-cae73c273fcd"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.508163 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "80b3afd5-29a7-4b9f-aafa-cae73c273fcd" (UID: "80b3afd5-29a7-4b9f-aafa-cae73c273fcd"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.508200 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run" (OuterVolumeSpecName: "var-run") pod "80b3afd5-29a7-4b9f-aafa-cae73c273fcd" (UID: "80b3afd5-29a7-4b9f-aafa-cae73c273fcd"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.508650 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "80b3afd5-29a7-4b9f-aafa-cae73c273fcd" (UID: "80b3afd5-29a7-4b9f-aafa-cae73c273fcd"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.508669 4922 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.508818 4922 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.508890 4922 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.508979 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-scripts" (OuterVolumeSpecName: "scripts") pod "80b3afd5-29a7-4b9f-aafa-cae73c273fcd" (UID: "80b3afd5-29a7-4b9f-aafa-cae73c273fcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.513126 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-kube-api-access-gnvvz" (OuterVolumeSpecName: "kube-api-access-gnvvz") pod "80b3afd5-29a7-4b9f-aafa-cae73c273fcd" (UID: "80b3afd5-29a7-4b9f-aafa-cae73c273fcd"). InnerVolumeSpecName "kube-api-access-gnvvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.518908 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.610240 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.610297 4922 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:56 crc kubenswrapper[4922]: I0126 14:27:56.610310 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnvvz\" (UniqueName: \"kubernetes.io/projected/80b3afd5-29a7-4b9f-aafa-cae73c273fcd-kube-api-access-gnvvz\") on node \"crc\" DevicePath \"\"" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.081120 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerStarted","Data":"6ad90e902c1f25153bd2eccf04fbcf9f9bb9fa211ae716c8af2106a3a32e5b4c"} Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.083285 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-tghpt" event={"ID":"80b3afd5-29a7-4b9f-aafa-cae73c273fcd","Type":"ContainerDied","Data":"a74a46d049e331a66b678c3d9d7f16a41086c636d1dc2d9e5ca44eeb21ed7df1"} Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.083448 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74a46d049e331a66b678c3d9d7f16a41086c636d1dc2d9e5ca44eeb21ed7df1" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.083680 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-tghpt" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.103444 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf34eda-49e0-412d-82b6-fe587116900f" path="/var/lib/kubelet/pods/1bf34eda-49e0-412d-82b6-fe587116900f/volumes" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.156753 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x4rqw-config-tghpt"] Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.167717 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-x4rqw-config-tghpt"] Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.210812 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x4rqw-config-cbv6f"] Jan 26 14:27:57 crc kubenswrapper[4922]: E0126 14:27:57.211582 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b3afd5-29a7-4b9f-aafa-cae73c273fcd" containerName="ovn-config" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.211675 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b3afd5-29a7-4b9f-aafa-cae73c273fcd" containerName="ovn-config" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.211942 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="80b3afd5-29a7-4b9f-aafa-cae73c273fcd" containerName="ovn-config" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.212872 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.224005 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.231278 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw-config-cbv6f"] Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.327136 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.327178 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run-ovn\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.327204 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-additional-scripts\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.327231 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-log-ovn\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.327279 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-scripts\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.327303 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlqlz\" (UniqueName: \"kubernetes.io/projected/7a1c17ad-9137-4247-821b-a5ef4c7eb267-kube-api-access-nlqlz\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.428217 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-scripts\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.428274 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlqlz\" (UniqueName: 
\"kubernetes.io/projected/7a1c17ad-9137-4247-821b-a5ef4c7eb267-kube-api-access-nlqlz\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.428361 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.428382 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run-ovn\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.428403 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-additional-scripts\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.428429 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-log-ovn\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.428691 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-log-ovn\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.428712 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run-ovn\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.428712 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.429261 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-additional-scripts\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.430490 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-scripts\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.453109 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlqlz\" (UniqueName: \"kubernetes.io/projected/7a1c17ad-9137-4247-821b-a5ef4c7eb267-kube-api-access-nlqlz\") pod \"ovn-controller-x4rqw-config-cbv6f\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.540057 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:27:57 crc kubenswrapper[4922]: I0126 14:27:57.997993 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw-config-cbv6f"] Jan 26 14:27:58 crc kubenswrapper[4922]: W0126 14:27:58.010904 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a1c17ad_9137_4247_821b_a5ef4c7eb267.slice/crio-205b4f42afebd2469f55f554ebd8fae92c1034b3995a3c7dbe33fc2eee622843 WatchSource:0}: Error finding container 205b4f42afebd2469f55f554ebd8fae92c1034b3995a3c7dbe33fc2eee622843: Status 404 returned error can't find the container with id 205b4f42afebd2469f55f554ebd8fae92c1034b3995a3c7dbe33fc2eee622843 Jan 26 14:27:58 crc kubenswrapper[4922]: I0126 14:27:58.093698 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-cbv6f" event={"ID":"7a1c17ad-9137-4247-821b-a5ef4c7eb267","Type":"ContainerStarted","Data":"205b4f42afebd2469f55f554ebd8fae92c1034b3995a3c7dbe33fc2eee622843"} Jan 26 14:27:59 crc kubenswrapper[4922]: I0126 14:27:59.103326 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80b3afd5-29a7-4b9f-aafa-cae73c273fcd" path="/var/lib/kubelet/pods/80b3afd5-29a7-4b9f-aafa-cae73c273fcd/volumes" Jan 26 14:27:59 crc kubenswrapper[4922]: I0126 14:27:59.103751 4922 generic.go:334] "Generic (PLEG): container finished" podID="a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" containerID="42187ea23f551e3a6ecbdaa86f590bfe0c6b4ea6ca6f6f507a6934717190cf42" exitCode=0 Jan 26 14:27:59 crc kubenswrapper[4922]: I0126 14:27:59.104536 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerStarted","Data":"0f2597c1fce9cfcf343b11b6bf506c63e67b3781312bfe98d2963cd4241199d3"} Jan 26 14:27:59 crc kubenswrapper[4922]: I0126 14:27:59.104588 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9mb5n" event={"ID":"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb","Type":"ContainerDied","Data":"42187ea23f551e3a6ecbdaa86f590bfe0c6b4ea6ca6f6f507a6934717190cf42"} Jan 26 14:27:59 crc kubenswrapper[4922]: I0126 14:27:59.105012 4922 generic.go:334] "Generic (PLEG): container finished" podID="7a1c17ad-9137-4247-821b-a5ef4c7eb267" containerID="fd617cfc26cf40afb118fea46b2780dbc5f1237b667542be09c2d22405c07aa3" exitCode=0 Jan 26 14:27:59 crc kubenswrapper[4922]: I0126 14:27:59.105053 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-cbv6f" event={"ID":"7a1c17ad-9137-4247-821b-a5ef4c7eb267","Type":"ContainerDied","Data":"fd617cfc26cf40afb118fea46b2780dbc5f1237b667542be09c2d22405c07aa3"} Jan 26 
14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.567999 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.572543 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.687837 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-etc-swift\") pod \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.687956 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-scripts\") pod \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688002 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-dispersionconf\") pod \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688054 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-combined-ca-bundle\") pod \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688108 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-swiftconf\") pod \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688136 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-ring-data-devices\") pod \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688151 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run-ovn\") pod \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688185 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run\") pod \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688207 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-scripts\") pod \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 
14:28:00.688226 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9xb5\" (UniqueName: \"kubernetes.io/projected/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-kube-api-access-l9xb5\") pod \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\" (UID: \"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688261 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-additional-scripts\") pod \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688317 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-log-ovn\") pod \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688342 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlqlz\" (UniqueName: \"kubernetes.io/projected/7a1c17ad-9137-4247-821b-a5ef4c7eb267-kube-api-access-nlqlz\") pod \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\" (UID: \"7a1c17ad-9137-4247-821b-a5ef4c7eb267\") " Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688509 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run" (OuterVolumeSpecName: "var-run") pod "7a1c17ad-9137-4247-821b-a5ef4c7eb267" (UID: "7a1c17ad-9137-4247-821b-a5ef4c7eb267"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688623 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" (UID: "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.688992 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7a1c17ad-9137-4247-821b-a5ef4c7eb267" (UID: "7a1c17ad-9137-4247-821b-a5ef4c7eb267"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.689053 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7a1c17ad-9137-4247-821b-a5ef4c7eb267" (UID: "7a1c17ad-9137-4247-821b-a5ef4c7eb267"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.690270 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" (UID: "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.690325 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-scripts" (OuterVolumeSpecName: "scripts") pod "7a1c17ad-9137-4247-821b-a5ef4c7eb267" (UID: "7a1c17ad-9137-4247-821b-a5ef4c7eb267"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.690750 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7a1c17ad-9137-4247-821b-a5ef4c7eb267" (UID: "7a1c17ad-9137-4247-821b-a5ef4c7eb267"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.693560 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-kube-api-access-l9xb5" (OuterVolumeSpecName: "kube-api-access-l9xb5") pod "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" (UID: "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb"). InnerVolumeSpecName "kube-api-access-l9xb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.706476 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" (UID: "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.708470 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1c17ad-9137-4247-821b-a5ef4c7eb267-kube-api-access-nlqlz" (OuterVolumeSpecName: "kube-api-access-nlqlz") pod "7a1c17ad-9137-4247-821b-a5ef4c7eb267" (UID: "7a1c17ad-9137-4247-821b-a5ef4c7eb267"). InnerVolumeSpecName "kube-api-access-nlqlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.708679 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-scripts" (OuterVolumeSpecName: "scripts") pod "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" (UID: "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.713641 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" (UID: "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.714018 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" (UID: "a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789745 4922 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789781 4922 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789791 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlqlz\" (UniqueName: \"kubernetes.io/projected/7a1c17ad-9137-4247-821b-a5ef4c7eb267-kube-api-access-nlqlz\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789803 4922 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789813 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a1c17ad-9137-4247-821b-a5ef4c7eb267-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789822 4922 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789830 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789838 4922 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789846 4922 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789855 4922 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789863 4922 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a1c17ad-9137-4247-821b-a5ef4c7eb267-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789871 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9xb5\" (UniqueName: \"kubernetes.io/projected/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-kube-api-access-l9xb5\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:00 crc kubenswrapper[4922]: I0126 14:28:00.789881 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 
14:28:01.128596 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-cbv6f" event={"ID":"7a1c17ad-9137-4247-821b-a5ef4c7eb267","Type":"ContainerDied","Data":"205b4f42afebd2469f55f554ebd8fae92c1034b3995a3c7dbe33fc2eee622843"} Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.128656 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="205b4f42afebd2469f55f554ebd8fae92c1034b3995a3c7dbe33fc2eee622843" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.128675 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-cbv6f" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.131778 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9mb5n" event={"ID":"a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb","Type":"ContainerDied","Data":"303aef872335ff2a01de93b5644a4fa1a305d36a44a7c62beb86e7b20cb08a46"} Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.131930 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9mb5n" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.131940 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="303aef872335ff2a01de93b5644a4fa1a305d36a44a7c62beb86e7b20cb08a46" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.670357 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x4rqw-config-cbv6f"] Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.678197 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-x4rqw-config-cbv6f"] Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.809769 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x4rqw-config-8w6xb"] Jan 26 14:28:01 crc kubenswrapper[4922]: E0126 14:28:01.810408 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" containerName="swift-ring-rebalance" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.810494 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" containerName="swift-ring-rebalance" Jan 26 14:28:01 crc kubenswrapper[4922]: E0126 14:28:01.810555 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1c17ad-9137-4247-821b-a5ef4c7eb267" containerName="ovn-config" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.810606 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1c17ad-9137-4247-821b-a5ef4c7eb267" containerName="ovn-config" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.810805 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb" containerName="swift-ring-rebalance" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.810876 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1c17ad-9137-4247-821b-a5ef4c7eb267" containerName="ovn-config" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.811508 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.814957 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.822585 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw-config-8w6xb"] Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.907447 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-additional-scripts\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.907685 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run-ovn\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.907737 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.907790 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-scripts\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.907873 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-log-ovn\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:01 crc kubenswrapper[4922]: I0126 14:28:01.907955 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sftzt\" (UniqueName: \"kubernetes.io/projected/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-kube-api-access-sftzt\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.009131 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run-ovn\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.009176 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run\") pod 
\"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.009214 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-scripts\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.009252 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-log-ovn\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.009285 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sftzt\" (UniqueName: \"kubernetes.io/projected/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-kube-api-access-sftzt\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.009318 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-additional-scripts\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.009540 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.009549 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run-ovn\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.009644 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-log-ovn\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.010147 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-additional-scripts\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.011567 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-scripts\") pod 
\"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.029719 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sftzt\" (UniqueName: \"kubernetes.io/projected/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-kube-api-access-sftzt\") pod \"ovn-controller-x4rqw-config-8w6xb\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.187103 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:02 crc kubenswrapper[4922]: I0126 14:28:02.470101 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x4rqw-config-8w6xb"] Jan 26 14:28:03 crc kubenswrapper[4922]: I0126 14:28:03.103732 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1c17ad-9137-4247-821b-a5ef4c7eb267" path="/var/lib/kubelet/pods/7a1c17ad-9137-4247-821b-a5ef4c7eb267/volumes" Jan 26 14:28:03 crc kubenswrapper[4922]: I0126 14:28:03.151971 4922 generic.go:334] "Generic (PLEG): container finished" podID="6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" containerID="638bb3703e0410b6015018709f7753c34e6d312ba6aa1892b9ca9a33c0a68957" exitCode=0 Jan 26 14:28:03 crc kubenswrapper[4922]: I0126 14:28:03.152043 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-8w6xb" event={"ID":"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2","Type":"ContainerDied","Data":"638bb3703e0410b6015018709f7753c34e6d312ba6aa1892b9ca9a33c0a68957"} Jan 26 14:28:03 crc kubenswrapper[4922]: I0126 14:28:03.152139 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-8w6xb" event={"ID":"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2","Type":"ContainerStarted","Data":"5c8f9685c9dce5afdccd594737ec86140acc006a702a5c61af59e5c481a15163"} Jan 26 14:28:03 crc kubenswrapper[4922]: I0126 14:28:03.536322 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 14:28:03 crc kubenswrapper[4922]: I0126 14:28:03.800373 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:28:03 crc kubenswrapper[4922]: I0126 14:28:03.960055 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8nvcl"] Jan 26 14:28:03 crc kubenswrapper[4922]: I0126 14:28:03.961772 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:03 crc kubenswrapper[4922]: I0126 14:28:03.975713 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8nvcl"] Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.066195 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zzv6s"] Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.067281 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.078030 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-operator-scripts\") pod \"barbican-db-create-8nvcl\" (UID: \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\") " pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.078378 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqxc\" (UniqueName: \"kubernetes.io/projected/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-kube-api-access-nhqxc\") pod \"barbican-db-create-8nvcl\" (UID: \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\") " pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.080018 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1b47-account-create-update-tb2wz"] Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.081138 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.088112 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.093488 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zzv6s"] Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.107869 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1b47-account-create-update-tb2wz"] Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.125901 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.170121 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d418-account-create-update-v6qdg"] Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.181152 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-operator-scripts\") pod \"barbican-db-create-8nvcl\" (UID: \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\") " pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.181202 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkqgx\" (UniqueName: \"kubernetes.io/projected/5acb458f-2080-4c36-86cd-d0e8004b9f9d-kube-api-access-pkqgx\") pod \"barbican-1b47-account-create-update-tb2wz\" (UID: \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\") " pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.181234 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa199068-fd5f-415d-82bd-32fd3b23a926-operator-scripts\") pod \"cinder-db-create-zzv6s\" (UID: \"aa199068-fd5f-415d-82bd-32fd3b23a926\") " pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.181260 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5acb458f-2080-4c36-86cd-d0e8004b9f9d-operator-scripts\") pod \"barbican-1b47-account-create-update-tb2wz\" (UID: \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\") " pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.181296 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcl2\" (UniqueName: \"kubernetes.io/projected/aa199068-fd5f-415d-82bd-32fd3b23a926-kube-api-access-wtcl2\") pod \"cinder-db-create-zzv6s\" (UID: \"aa199068-fd5f-415d-82bd-32fd3b23a926\") " pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.181378 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhqxc\" (UniqueName: \"kubernetes.io/projected/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-kube-api-access-nhqxc\") pod \"barbican-db-create-8nvcl\" (UID: \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\") " pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.181788 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.182282 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-operator-scripts\") pod \"barbican-db-create-8nvcl\" (UID: \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\") " pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.196650 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.223182 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhqxc\" (UniqueName: \"kubernetes.io/projected/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-kube-api-access-nhqxc\") pod \"barbican-db-create-8nvcl\" (UID: \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\") " pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.245242 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d418-account-create-update-v6qdg"] Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.282799 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c573fb3-9fba-47ec-8951-b069561ed90e-operator-scripts\") pod \"cinder-d418-account-create-update-v6qdg\" (UID: \"4c573fb3-9fba-47ec-8951-b069561ed90e\") " pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.282850 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkqgx\" (UniqueName: \"kubernetes.io/projected/5acb458f-2080-4c36-86cd-d0e8004b9f9d-kube-api-access-pkqgx\") pod \"barbican-1b47-account-create-update-tb2wz\" (UID: \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\") " pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.282887 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa199068-fd5f-415d-82bd-32fd3b23a926-operator-scripts\") pod \"cinder-db-create-zzv6s\" (UID: \"aa199068-fd5f-415d-82bd-32fd3b23a926\") " 
pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.282921 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5acb458f-2080-4c36-86cd-d0e8004b9f9d-operator-scripts\") pod \"barbican-1b47-account-create-update-tb2wz\" (UID: \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\") " pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.282978 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbwt\" (UniqueName: \"kubernetes.io/projected/4c573fb3-9fba-47ec-8951-b069561ed90e-kube-api-access-2pbwt\") pod \"cinder-d418-account-create-update-v6qdg\" (UID: \"4c573fb3-9fba-47ec-8951-b069561ed90e\") " pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.283002 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcl2\" (UniqueName: \"kubernetes.io/projected/aa199068-fd5f-415d-82bd-32fd3b23a926-kube-api-access-wtcl2\") pod \"cinder-db-create-zzv6s\" (UID: \"aa199068-fd5f-415d-82bd-32fd3b23a926\") " pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.283653 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5acb458f-2080-4c36-86cd-d0e8004b9f9d-operator-scripts\") pod \"barbican-1b47-account-create-update-tb2wz\" (UID: \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\") " pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.283806 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa199068-fd5f-415d-82bd-32fd3b23a926-operator-scripts\") pod \"cinder-db-create-zzv6s\" (UID: \"aa199068-fd5f-415d-82bd-32fd3b23a926\") " pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.300348 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcl2\" (UniqueName: \"kubernetes.io/projected/aa199068-fd5f-415d-82bd-32fd3b23a926-kube-api-access-wtcl2\") pod \"cinder-db-create-zzv6s\" (UID: \"aa199068-fd5f-415d-82bd-32fd3b23a926\") " pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.316000 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkqgx\" (UniqueName: \"kubernetes.io/projected/5acb458f-2080-4c36-86cd-d0e8004b9f9d-kube-api-access-pkqgx\") pod \"barbican-1b47-account-create-update-tb2wz\" (UID: \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\") " pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.317981 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.357544 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-l2gr2"] Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.361344 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.363264 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.365688 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.365975 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.366786 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j5sp8" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.379302 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l2gr2"] Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.385138 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c573fb3-9fba-47ec-8951-b069561ed90e-operator-scripts\") pod \"cinder-d418-account-create-update-v6qdg\" (UID: \"4c573fb3-9fba-47ec-8951-b069561ed90e\") " pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.385221 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pbwt\" (UniqueName: \"kubernetes.io/projected/4c573fb3-9fba-47ec-8951-b069561ed90e-kube-api-access-2pbwt\") pod \"cinder-d418-account-create-update-v6qdg\" (UID: \"4c573fb3-9fba-47ec-8951-b069561ed90e\") " pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.386055 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c573fb3-9fba-47ec-8951-b069561ed90e-operator-scripts\") pod \"cinder-d418-account-create-update-v6qdg\" (UID: \"4c573fb3-9fba-47ec-8951-b069561ed90e\") " pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.390453 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.400507 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.407033 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbwt\" (UniqueName: \"kubernetes.io/projected/4c573fb3-9fba-47ec-8951-b069561ed90e-kube-api-access-2pbwt\") pod \"cinder-d418-account-create-update-v6qdg\" (UID: \"4c573fb3-9fba-47ec-8951-b069561ed90e\") " pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.488444 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-combined-ca-bundle\") pod \"keystone-db-sync-l2gr2\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.488497 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5nbk\" (UniqueName: \"kubernetes.io/projected/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-kube-api-access-w5nbk\") pod \"keystone-db-sync-l2gr2\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.488568 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-config-data\") pod \"keystone-db-sync-l2gr2\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.496507 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.504177 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.589969 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run\") pod \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.590341 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-log-ovn\") pod \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.590389 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-scripts\") pod \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.590413 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-additional-scripts\") pod \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.590464 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run-ovn\") pod \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.590554 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sftzt\" (UniqueName: \"kubernetes.io/projected/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-kube-api-access-sftzt\") pod \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\" (UID: \"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2\") " Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.590767 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-config-data\") pod \"keystone-db-sync-l2gr2\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.590854 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-combined-ca-bundle\") pod \"keystone-db-sync-l2gr2\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.590876 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5nbk\" (UniqueName: \"kubernetes.io/projected/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-kube-api-access-w5nbk\") pod \"keystone-db-sync-l2gr2\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.591217 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run" 
(OuterVolumeSpecName: "var-run") pod "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" (UID: "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.591248 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" (UID: "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.592627 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-scripts" (OuterVolumeSpecName: "scripts") pod "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" (UID: "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.596015 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" (UID: "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.596112 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" (UID: "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.613804 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-combined-ca-bundle\") pod \"keystone-db-sync-l2gr2\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.625423 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-config-data\") pod \"keystone-db-sync-l2gr2\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.627036 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-kube-api-access-sftzt" (OuterVolumeSpecName: "kube-api-access-sftzt") pod "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" (UID: "6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2"). InnerVolumeSpecName "kube-api-access-sftzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.628792 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5nbk\" (UniqueName: \"kubernetes.io/projected/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-kube-api-access-w5nbk\") pod \"keystone-db-sync-l2gr2\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.694004 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sftzt\" (UniqueName: \"kubernetes.io/projected/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-kube-api-access-sftzt\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.694051 4922 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.694087 4922 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.694101 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.694117 4922 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.694131 4922 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.719115 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:04 crc kubenswrapper[4922]: I0126 14:28:04.935043 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zzv6s"] Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.173454 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8nvcl"] Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.182732 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zzv6s" event={"ID":"aa199068-fd5f-415d-82bd-32fd3b23a926","Type":"ContainerStarted","Data":"e266d2eee357127e56af73fddbec382ed82ea692247dfcd193a2f807a6139386"} Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.184308 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x4rqw-config-8w6xb" event={"ID":"6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2","Type":"ContainerDied","Data":"5c8f9685c9dce5afdccd594737ec86140acc006a702a5c61af59e5c481a15163"} Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.184327 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c8f9685c9dce5afdccd594737ec86140acc006a702a5c61af59e5c481a15163" Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.184388 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x4rqw-config-8w6xb" Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.206268 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1b47-account-create-update-tb2wz"] Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.364690 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d418-account-create-update-v6qdg"] Jan 26 14:28:05 crc kubenswrapper[4922]: W0126 14:28:05.371814 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c573fb3_9fba_47ec_8951_b069561ed90e.slice/crio-9c2b468cbb0f8b86ef9d58827232ded8a2ed038acdc149766a23264c646e2427 WatchSource:0}: Error finding container 9c2b468cbb0f8b86ef9d58827232ded8a2ed038acdc149766a23264c646e2427: Status 404 returned error can't find the container with id 9c2b468cbb0f8b86ef9d58827232ded8a2ed038acdc149766a23264c646e2427 Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.384542 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l2gr2"] Jan 26 14:28:05 crc kubenswrapper[4922]: W0126 14:28:05.392212 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5060ea35_5cb6_4f74_8f86_ec622a9c83d4.slice/crio-6df7077ff259f308df1e20014803ada0c05842d35575d5a5343be7446da5da62 WatchSource:0}: Error finding container 6df7077ff259f308df1e20014803ada0c05842d35575d5a5343be7446da5da62: Status 404 returned error can't find the container with id 6df7077ff259f308df1e20014803ada0c05842d35575d5a5343be7446da5da62 Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.607376 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x4rqw-config-8w6xb"] Jan 26 14:28:05 crc kubenswrapper[4922]: I0126 14:28:05.615226 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-x4rqw-config-8w6xb"] Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.197629 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d418-account-create-update-v6qdg" event={"ID":"4c573fb3-9fba-47ec-8951-b069561ed90e","Type":"ContainerStarted","Data":"9c2b468cbb0f8b86ef9d58827232ded8a2ed038acdc149766a23264c646e2427"} Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.199290 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8nvcl" event={"ID":"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d","Type":"ContainerStarted","Data":"1e8c87341aebecdd2bef978b8ca2c38657e61f03b7ab2f698ca3cf54bc0e15ce"} Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.214524 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1b47-account-create-update-tb2wz" event={"ID":"5acb458f-2080-4c36-86cd-d0e8004b9f9d","Type":"ContainerStarted","Data":"0d7d43061b336b8f83aa56b012b703465afaa9a930fb0c3af69b2d062816d579"} Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.216769 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l2gr2" event={"ID":"5060ea35-5cb6-4f74-8f86-ec622a9c83d4","Type":"ContainerStarted","Data":"6df7077ff259f308df1e20014803ada0c05842d35575d5a5343be7446da5da62"} Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.728105 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-hhqtm"] Jan 26 14:28:06 crc kubenswrapper[4922]: E0126 14:28:06.728449 4922 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" containerName="ovn-config" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.728465 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" containerName="ovn-config" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.728636 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" containerName="ovn-config" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.729147 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.738494 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hhqtm"] Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.807506 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-6mb2d"] Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.808566 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.813520 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.813819 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-qs5db" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.835936 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-operator-scripts\") pod \"glance-db-create-hhqtm\" (UID: \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\") " pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.836138 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pgsw\" (UniqueName: \"kubernetes.io/projected/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-kube-api-access-9pgsw\") pod \"glance-db-create-hhqtm\" (UID: \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\") " pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.837489 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-6mb2d"] Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.846314 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0ea3-account-create-update-6vlln"] Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.848733 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.851643 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.865405 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0ea3-account-create-update-6vlln"] Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.941570 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-combined-ca-bundle\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.941816 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjnc9\" (UniqueName: \"kubernetes.io/projected/eb180569-7ff8-4908-8bdd-66b681f030df-kube-api-access-bjnc9\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.942023 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pgsw\" (UniqueName: \"kubernetes.io/projected/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-kube-api-access-9pgsw\") pod \"glance-db-create-hhqtm\" (UID: \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\") " pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.942114 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21553358-b59a-4191-8376-e66491f5eadf-operator-scripts\") pod \"glance-0ea3-account-create-update-6vlln\" (UID: \"21553358-b59a-4191-8376-e66491f5eadf\") " pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.942135 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twmjw\" (UniqueName: \"kubernetes.io/projected/21553358-b59a-4191-8376-e66491f5eadf-kube-api-access-twmjw\") pod \"glance-0ea3-account-create-update-6vlln\" (UID: \"21553358-b59a-4191-8376-e66491f5eadf\") " pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.942165 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-db-sync-config-data\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.942243 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-operator-scripts\") pod \"glance-db-create-hhqtm\" (UID: \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\") " pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.942306 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-config-data\") pod 
\"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.943455 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-operator-scripts\") pod \"glance-db-create-hhqtm\" (UID: \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\") " pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.962390 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-x7466"] Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.977494 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pgsw\" (UniqueName: \"kubernetes.io/projected/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-kube-api-access-9pgsw\") pod \"glance-db-create-hhqtm\" (UID: \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\") " pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.995012 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x7466"] Jan 26 14:28:06 crc kubenswrapper[4922]: I0126 14:28:06.997300 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x7466" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.045625 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6e159-08af-466b-ac1f-0faab319b9f5-operator-scripts\") pod \"neutron-db-create-x7466\" (UID: \"e7b6e159-08af-466b-ac1f-0faab319b9f5\") " pod="openstack/neutron-db-create-x7466" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.045686 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21553358-b59a-4191-8376-e66491f5eadf-operator-scripts\") pod \"glance-0ea3-account-create-update-6vlln\" (UID: \"21553358-b59a-4191-8376-e66491f5eadf\") " pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.045707 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twmjw\" (UniqueName: \"kubernetes.io/projected/21553358-b59a-4191-8376-e66491f5eadf-kube-api-access-twmjw\") pod \"glance-0ea3-account-create-update-6vlln\" (UID: \"21553358-b59a-4191-8376-e66491f5eadf\") " pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.045730 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-db-sync-config-data\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.045766 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5shcf\" (UniqueName: \"kubernetes.io/projected/e7b6e159-08af-466b-ac1f-0faab319b9f5-kube-api-access-5shcf\") pod \"neutron-db-create-x7466\" (UID: \"e7b6e159-08af-466b-ac1f-0faab319b9f5\") " pod="openstack/neutron-db-create-x7466" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.045806 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-config-data\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.045836 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-combined-ca-bundle\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.045874 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjnc9\" (UniqueName: \"kubernetes.io/projected/eb180569-7ff8-4908-8bdd-66b681f030df-kube-api-access-bjnc9\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.046931 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21553358-b59a-4191-8376-e66491f5eadf-operator-scripts\") pod \"glance-0ea3-account-create-update-6vlln\" (UID: \"21553358-b59a-4191-8376-e66491f5eadf\") " pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.050640 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-db-sync-config-data\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.056492 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-config-data\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.057058 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-combined-ca-bundle\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.061305 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-643c-account-create-update-v5bs8"] Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.062287 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.064205 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.065814 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twmjw\" (UniqueName: \"kubernetes.io/projected/21553358-b59a-4191-8376-e66491f5eadf-kube-api-access-twmjw\") pod \"glance-0ea3-account-create-update-6vlln\" (UID: \"21553358-b59a-4191-8376-e66491f5eadf\") " pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.067077 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjnc9\" (UniqueName: \"kubernetes.io/projected/eb180569-7ff8-4908-8bdd-66b681f030df-kube-api-access-bjnc9\") pod \"watcher-db-sync-6mb2d\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.072520 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.106775 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2" path="/var/lib/kubelet/pods/6db1c8cf-1e5a-47ca-9350-19f5f4dc7df2/volumes" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.107460 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-643c-account-create-update-v5bs8"] Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.149740 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-operator-scripts\") pod \"neutron-643c-account-create-update-v5bs8\" (UID: \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\") " pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.149815 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5299\" (UniqueName: \"kubernetes.io/projected/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-kube-api-access-s5299\") pod \"neutron-643c-account-create-update-v5bs8\" (UID: \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\") " pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.149884 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6e159-08af-466b-ac1f-0faab319b9f5-operator-scripts\") pod \"neutron-db-create-x7466\" (UID: \"e7b6e159-08af-466b-ac1f-0faab319b9f5\") " pod="openstack/neutron-db-create-x7466" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.149963 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5shcf\" (UniqueName: \"kubernetes.io/projected/e7b6e159-08af-466b-ac1f-0faab319b9f5-kube-api-access-5shcf\") pod \"neutron-db-create-x7466\" (UID: \"e7b6e159-08af-466b-ac1f-0faab319b9f5\") " pod="openstack/neutron-db-create-x7466" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.151344 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6e159-08af-466b-ac1f-0faab319b9f5-operator-scripts\") pod \"neutron-db-create-x7466\" (UID: \"e7b6e159-08af-466b-ac1f-0faab319b9f5\") " pod="openstack/neutron-db-create-x7466" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.173362 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5shcf\" (UniqueName: \"kubernetes.io/projected/e7b6e159-08af-466b-ac1f-0faab319b9f5-kube-api-access-5shcf\") pod \"neutron-db-create-x7466\" (UID: \"e7b6e159-08af-466b-ac1f-0faab319b9f5\") " pod="openstack/neutron-db-create-x7466" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.194026 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.211177 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.244806 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8nvcl" event={"ID":"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d","Type":"ContainerStarted","Data":"b39ee1e538e33aed07488b710fbb6c6b9358a3025f2e53a35c623842d62e0b25"} Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.251895 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-operator-scripts\") pod \"neutron-643c-account-create-update-v5bs8\" (UID: \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\") " pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.252097 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5299\" (UniqueName: \"kubernetes.io/projected/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-kube-api-access-s5299\") pod \"neutron-643c-account-create-update-v5bs8\" (UID: \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\") " pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.253291 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-operator-scripts\") pod \"neutron-643c-account-create-update-v5bs8\" (UID: \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\") " pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.257575 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zzv6s" event={"ID":"aa199068-fd5f-415d-82bd-32fd3b23a926","Type":"ContainerStarted","Data":"12cc4b04e483e50921a5f7033ba49c4751f5bc1540ee817537e89456d5a5037e"} Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.259311 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1b47-account-create-update-tb2wz" event={"ID":"5acb458f-2080-4c36-86cd-d0e8004b9f9d","Type":"ContainerStarted","Data":"bbe497081ab22b87d3a07a74cf62dc7806bcda9ef4bcdfac0c9de2046958fc6a"} Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.288825 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5299\" (UniqueName: \"kubernetes.io/projected/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-kube-api-access-s5299\") pod \"neutron-643c-account-create-update-v5bs8\" (UID: \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\") " pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.296864 4922 generic.go:334] "Generic (PLEG): container finished" podID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerID="0f2597c1fce9cfcf343b11b6bf506c63e67b3781312bfe98d2963cd4241199d3" 
exitCode=0 Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.297002 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerDied","Data":"0f2597c1fce9cfcf343b11b6bf506c63e67b3781312bfe98d2963cd4241199d3"} Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.309939 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-zzv6s" podStartSLOduration=3.309915975 podStartE2EDuration="3.309915975s" podCreationTimestamp="2026-01-26 14:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:07.308867685 +0000 UTC m=+1104.511130457" watchObservedRunningTime="2026-01-26 14:28:07.309915975 +0000 UTC m=+1104.512178747" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.311657 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d418-account-create-update-v6qdg" event={"ID":"4c573fb3-9fba-47ec-8951-b069561ed90e","Type":"ContainerStarted","Data":"2eaccac50b73a3d2c5ffce13fe7b7a4d6f075b40b091362a3b3a9c98c8c83bbc"} Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.319774 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-8nvcl" podStartSLOduration=4.319751231 podStartE2EDuration="4.319751231s" podCreationTimestamp="2026-01-26 14:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:07.270594768 +0000 UTC m=+1104.472857540" watchObservedRunningTime="2026-01-26 14:28:07.319751231 +0000 UTC m=+1104.522014003" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.334110 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-1b47-account-create-update-tb2wz" podStartSLOduration=3.334085645 podStartE2EDuration="3.334085645s" podCreationTimestamp="2026-01-26 14:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:07.328038295 +0000 UTC m=+1104.530301067" watchObservedRunningTime="2026-01-26 14:28:07.334085645 +0000 UTC m=+1104.536348417" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.348522 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x7466" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.349822 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d418-account-create-update-v6qdg" podStartSLOduration=3.349803608 podStartE2EDuration="3.349803608s" podCreationTimestamp="2026-01-26 14:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:07.348047368 +0000 UTC m=+1104.550310150" watchObservedRunningTime="2026-01-26 14:28:07.349803608 +0000 UTC m=+1104.552066380" Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.411876 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hhqtm"] Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.469039 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:07 crc kubenswrapper[4922]: W0126 14:28:07.556191 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1526bd7f_501e_4add_b2c3_e1f6f803d3cf.slice/crio-960c08303c5bf359ad4ad6c68d870c60dad0dfc7ec6b64f61c9112cfb2c84975 WatchSource:0}: Error finding container 960c08303c5bf359ad4ad6c68d870c60dad0dfc7ec6b64f61c9112cfb2c84975: Status 404 returned error can't find the container with id 960c08303c5bf359ad4ad6c68d870c60dad0dfc7ec6b64f61c9112cfb2c84975 Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.772887 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-6mb2d"] Jan 26 14:28:07 crc kubenswrapper[4922]: I0126 14:28:07.930899 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0ea3-account-create-update-6vlln"] Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.010124 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x7466"] Jan 26 14:28:08 crc kubenswrapper[4922]: W0126 14:28:08.021359 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7b6e159_08af_466b_ac1f_0faab319b9f5.slice/crio-9207bbebc2f8916f340f21995ba48c3c9c119634b4206312a781c515c33a8b4b WatchSource:0}: Error finding container 9207bbebc2f8916f340f21995ba48c3c9c119634b4206312a781c515c33a8b4b: Status 404 returned error can't find the container with id 9207bbebc2f8916f340f21995ba48c3c9c119634b4206312a781c515c33a8b4b Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.115481 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-643c-account-create-update-v5bs8"] Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.321514 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerStarted","Data":"6030b56f040caf8ee74a8cb47fb8055b8d271057e3dc42ba53563f728809542a"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.322492 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-643c-account-create-update-v5bs8" event={"ID":"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705","Type":"ContainerStarted","Data":"67f8bf45bbda36af93ffc837d75a53b5333ad5be9c46785b75b8c4845a1e5510"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.323744 4922 generic.go:334] "Generic (PLEG): container finished" podID="4c573fb3-9fba-47ec-8951-b069561ed90e" containerID="2eaccac50b73a3d2c5ffce13fe7b7a4d6f075b40b091362a3b3a9c98c8c83bbc" exitCode=0 Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.323786 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d418-account-create-update-v6qdg" event={"ID":"4c573fb3-9fba-47ec-8951-b069561ed90e","Type":"ContainerDied","Data":"2eaccac50b73a3d2c5ffce13fe7b7a4d6f075b40b091362a3b3a9c98c8c83bbc"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.325156 4922 generic.go:334] "Generic (PLEG): container finished" podID="aa199068-fd5f-415d-82bd-32fd3b23a926" containerID="12cc4b04e483e50921a5f7033ba49c4751f5bc1540ee817537e89456d5a5037e" exitCode=0 Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.325195 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zzv6s" 
event={"ID":"aa199068-fd5f-415d-82bd-32fd3b23a926","Type":"ContainerDied","Data":"12cc4b04e483e50921a5f7033ba49c4751f5bc1540ee817537e89456d5a5037e"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.326272 4922 generic.go:334] "Generic (PLEG): container finished" podID="1526bd7f-501e-4add-b2c3-e1f6f803d3cf" containerID="2ebd0308850195c47ef3e9b1aa49bc8b04a9b447707a5c28538c6dd1dc288226" exitCode=0 Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.326311 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hhqtm" event={"ID":"1526bd7f-501e-4add-b2c3-e1f6f803d3cf","Type":"ContainerDied","Data":"2ebd0308850195c47ef3e9b1aa49bc8b04a9b447707a5c28538c6dd1dc288226"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.326325 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hhqtm" event={"ID":"1526bd7f-501e-4add-b2c3-e1f6f803d3cf","Type":"ContainerStarted","Data":"960c08303c5bf359ad4ad6c68d870c60dad0dfc7ec6b64f61c9112cfb2c84975"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.327461 4922 generic.go:334] "Generic (PLEG): container finished" podID="5acb458f-2080-4c36-86cd-d0e8004b9f9d" containerID="bbe497081ab22b87d3a07a74cf62dc7806bcda9ef4bcdfac0c9de2046958fc6a" exitCode=0 Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.327505 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1b47-account-create-update-tb2wz" event={"ID":"5acb458f-2080-4c36-86cd-d0e8004b9f9d","Type":"ContainerDied","Data":"bbe497081ab22b87d3a07a74cf62dc7806bcda9ef4bcdfac0c9de2046958fc6a"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.329090 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0ea3-account-create-update-6vlln" event={"ID":"21553358-b59a-4191-8376-e66491f5eadf","Type":"ContainerStarted","Data":"f723fec3ba6589a0da3c13a86dcf560e9ee07683222299d3f30523d96eff043f"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.329114 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0ea3-account-create-update-6vlln" event={"ID":"21553358-b59a-4191-8376-e66491f5eadf","Type":"ContainerStarted","Data":"2fe1685f36784ac8ca3ab0fd903384a18f79c755b25fa827ff73a9870f2717cd"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.331697 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x7466" event={"ID":"e7b6e159-08af-466b-ac1f-0faab319b9f5","Type":"ContainerStarted","Data":"5b1f6bc4e5fbf6288fefd5266a9f1b259d9a46b28a4e3dcc8214f6ecadcecae4"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.331724 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x7466" event={"ID":"e7b6e159-08af-466b-ac1f-0faab319b9f5","Type":"ContainerStarted","Data":"9207bbebc2f8916f340f21995ba48c3c9c119634b4206312a781c515c33a8b4b"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.335836 4922 generic.go:334] "Generic (PLEG): container finished" podID="ba5539a2-9e43-4f3b-8dbf-14d091e7b37d" containerID="b39ee1e538e33aed07488b710fbb6c6b9358a3025f2e53a35c623842d62e0b25" exitCode=0 Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.335911 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8nvcl" event={"ID":"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d","Type":"ContainerDied","Data":"b39ee1e538e33aed07488b710fbb6c6b9358a3025f2e53a35c623842d62e0b25"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.342257 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-db-sync-6mb2d" event={"ID":"eb180569-7ff8-4908-8bdd-66b681f030df","Type":"ContainerStarted","Data":"b3f2c014d975659f9da57e701d5374987284ac3632cfba5e376377e9cf190a21"} Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.363862 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-x7466" podStartSLOduration=2.363846004 podStartE2EDuration="2.363846004s" podCreationTimestamp="2026-01-26 14:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:08.355008605 +0000 UTC m=+1105.557271397" watchObservedRunningTime="2026-01-26 14:28:08.363846004 +0000 UTC m=+1105.566108776" Jan 26 14:28:08 crc kubenswrapper[4922]: I0126 14:28:08.403195 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-0ea3-account-create-update-6vlln" podStartSLOduration=2.403179732 podStartE2EDuration="2.403179732s" podCreationTimestamp="2026-01-26 14:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:08.399496298 +0000 UTC m=+1105.601759070" watchObservedRunningTime="2026-01-26 14:28:08.403179732 +0000 UTC m=+1105.605442504" Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.355176 4922 generic.go:334] "Generic (PLEG): container finished" podID="d19ecea1-4e7a-4bd9-9a7f-dca95afbe705" containerID="2941f915fa06b7f4a3aff91d956d8dcd088c1692803c8a0fa9f65f5dccbde398" exitCode=0 Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.355265 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-643c-account-create-update-v5bs8" event={"ID":"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705","Type":"ContainerDied","Data":"2941f915fa06b7f4a3aff91d956d8dcd088c1692803c8a0fa9f65f5dccbde398"} Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.357728 4922 generic.go:334] "Generic (PLEG): container finished" podID="21553358-b59a-4191-8376-e66491f5eadf" containerID="f723fec3ba6589a0da3c13a86dcf560e9ee07683222299d3f30523d96eff043f" exitCode=0 Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.357843 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0ea3-account-create-update-6vlln" event={"ID":"21553358-b59a-4191-8376-e66491f5eadf","Type":"ContainerDied","Data":"f723fec3ba6589a0da3c13a86dcf560e9ee07683222299d3f30523d96eff043f"} Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.360188 4922 generic.go:334] "Generic (PLEG): container finished" podID="e7b6e159-08af-466b-ac1f-0faab319b9f5" containerID="5b1f6bc4e5fbf6288fefd5266a9f1b259d9a46b28a4e3dcc8214f6ecadcecae4" exitCode=0 Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.360321 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x7466" event={"ID":"e7b6e159-08af-466b-ac1f-0faab319b9f5","Type":"ContainerDied","Data":"5b1f6bc4e5fbf6288fefd5266a9f1b259d9a46b28a4e3dcc8214f6ecadcecae4"} Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.882846 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.893395 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.934049 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c573fb3-9fba-47ec-8951-b069561ed90e-operator-scripts\") pod \"4c573fb3-9fba-47ec-8951-b069561ed90e\" (UID: \"4c573fb3-9fba-47ec-8951-b069561ed90e\") " Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.934160 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pgsw\" (UniqueName: \"kubernetes.io/projected/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-kube-api-access-9pgsw\") pod \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\" (UID: \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\") " Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.934247 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-operator-scripts\") pod \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\" (UID: \"1526bd7f-501e-4add-b2c3-e1f6f803d3cf\") " Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.934410 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pbwt\" (UniqueName: \"kubernetes.io/projected/4c573fb3-9fba-47ec-8951-b069561ed90e-kube-api-access-2pbwt\") pod \"4c573fb3-9fba-47ec-8951-b069561ed90e\" (UID: \"4c573fb3-9fba-47ec-8951-b069561ed90e\") " Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.935518 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1526bd7f-501e-4add-b2c3-e1f6f803d3cf" (UID: "1526bd7f-501e-4add-b2c3-e1f6f803d3cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:09 crc kubenswrapper[4922]: I0126 14:28:09.935529 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c573fb3-9fba-47ec-8951-b069561ed90e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c573fb3-9fba-47ec-8951-b069561ed90e" (UID: "4c573fb3-9fba-47ec-8951-b069561ed90e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.036288 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.036838 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c573fb3-9fba-47ec-8951-b069561ed90e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.065818 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-kube-api-access-9pgsw" (OuterVolumeSpecName: "kube-api-access-9pgsw") pod "1526bd7f-501e-4add-b2c3-e1f6f803d3cf" (UID: "1526bd7f-501e-4add-b2c3-e1f6f803d3cf"). InnerVolumeSpecName "kube-api-access-9pgsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.066167 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c573fb3-9fba-47ec-8951-b069561ed90e-kube-api-access-2pbwt" (OuterVolumeSpecName: "kube-api-access-2pbwt") pod "4c573fb3-9fba-47ec-8951-b069561ed90e" (UID: "4c573fb3-9fba-47ec-8951-b069561ed90e"). InnerVolumeSpecName "kube-api-access-2pbwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.139607 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pbwt\" (UniqueName: \"kubernetes.io/projected/4c573fb3-9fba-47ec-8951-b069561ed90e-kube-api-access-2pbwt\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.139642 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pgsw\" (UniqueName: \"kubernetes.io/projected/1526bd7f-501e-4add-b2c3-e1f6f803d3cf-kube-api-access-9pgsw\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.216051 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.224727 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.230108 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.241034 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhqxc\" (UniqueName: \"kubernetes.io/projected/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-kube-api-access-nhqxc\") pod \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\" (UID: \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\") " Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.241346 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5acb458f-2080-4c36-86cd-d0e8004b9f9d-operator-scripts\") pod \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\" (UID: \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\") " Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.241850 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba5539a2-9e43-4f3b-8dbf-14d091e7b37d" (UID: "ba5539a2-9e43-4f3b-8dbf-14d091e7b37d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.243469 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5acb458f-2080-4c36-86cd-d0e8004b9f9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5acb458f-2080-4c36-86cd-d0e8004b9f9d" (UID: "5acb458f-2080-4c36-86cd-d0e8004b9f9d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.245097 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-operator-scripts\") pod \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\" (UID: \"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d\") " Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.245312 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkqgx\" (UniqueName: \"kubernetes.io/projected/5acb458f-2080-4c36-86cd-d0e8004b9f9d-kube-api-access-pkqgx\") pod \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\" (UID: \"5acb458f-2080-4c36-86cd-d0e8004b9f9d\") " Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.248877 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5acb458f-2080-4c36-86cd-d0e8004b9f9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.248905 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.297604 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-kube-api-access-nhqxc" (OuterVolumeSpecName: "kube-api-access-nhqxc") pod "ba5539a2-9e43-4f3b-8dbf-14d091e7b37d" (UID: "ba5539a2-9e43-4f3b-8dbf-14d091e7b37d"). InnerVolumeSpecName "kube-api-access-nhqxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.298625 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5acb458f-2080-4c36-86cd-d0e8004b9f9d-kube-api-access-pkqgx" (OuterVolumeSpecName: "kube-api-access-pkqgx") pod "5acb458f-2080-4c36-86cd-d0e8004b9f9d" (UID: "5acb458f-2080-4c36-86cd-d0e8004b9f9d"). InnerVolumeSpecName "kube-api-access-pkqgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.349516 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtcl2\" (UniqueName: \"kubernetes.io/projected/aa199068-fd5f-415d-82bd-32fd3b23a926-kube-api-access-wtcl2\") pod \"aa199068-fd5f-415d-82bd-32fd3b23a926\" (UID: \"aa199068-fd5f-415d-82bd-32fd3b23a926\") " Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.349614 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa199068-fd5f-415d-82bd-32fd3b23a926-operator-scripts\") pod \"aa199068-fd5f-415d-82bd-32fd3b23a926\" (UID: \"aa199068-fd5f-415d-82bd-32fd3b23a926\") " Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.350019 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkqgx\" (UniqueName: \"kubernetes.io/projected/5acb458f-2080-4c36-86cd-d0e8004b9f9d-kube-api-access-pkqgx\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.350037 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhqxc\" (UniqueName: \"kubernetes.io/projected/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d-kube-api-access-nhqxc\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.350191 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa199068-fd5f-415d-82bd-32fd3b23a926-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa199068-fd5f-415d-82bd-32fd3b23a926" (UID: "aa199068-fd5f-415d-82bd-32fd3b23a926"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.354526 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa199068-fd5f-415d-82bd-32fd3b23a926-kube-api-access-wtcl2" (OuterVolumeSpecName: "kube-api-access-wtcl2") pod "aa199068-fd5f-415d-82bd-32fd3b23a926" (UID: "aa199068-fd5f-415d-82bd-32fd3b23a926"). InnerVolumeSpecName "kube-api-access-wtcl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.370352 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d418-account-create-update-v6qdg" event={"ID":"4c573fb3-9fba-47ec-8951-b069561ed90e","Type":"ContainerDied","Data":"9c2b468cbb0f8b86ef9d58827232ded8a2ed038acdc149766a23264c646e2427"} Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.370611 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2b468cbb0f8b86ef9d58827232ded8a2ed038acdc149766a23264c646e2427" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.370714 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d418-account-create-update-v6qdg" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.372456 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8nvcl" event={"ID":"ba5539a2-9e43-4f3b-8dbf-14d091e7b37d","Type":"ContainerDied","Data":"1e8c87341aebecdd2bef978b8ca2c38657e61f03b7ab2f698ca3cf54bc0e15ce"} Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.372491 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e8c87341aebecdd2bef978b8ca2c38657e61f03b7ab2f698ca3cf54bc0e15ce" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.372547 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8nvcl" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.374343 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zzv6s" event={"ID":"aa199068-fd5f-415d-82bd-32fd3b23a926","Type":"ContainerDied","Data":"e266d2eee357127e56af73fddbec382ed82ea692247dfcd193a2f807a6139386"} Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.374424 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e266d2eee357127e56af73fddbec382ed82ea692247dfcd193a2f807a6139386" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.374602 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zzv6s" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.376664 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hhqtm" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.376678 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hhqtm" event={"ID":"1526bd7f-501e-4add-b2c3-e1f6f803d3cf","Type":"ContainerDied","Data":"960c08303c5bf359ad4ad6c68d870c60dad0dfc7ec6b64f61c9112cfb2c84975"} Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.376710 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="960c08303c5bf359ad4ad6c68d870c60dad0dfc7ec6b64f61c9112cfb2c84975" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.378192 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1b47-account-create-update-tb2wz" event={"ID":"5acb458f-2080-4c36-86cd-d0e8004b9f9d","Type":"ContainerDied","Data":"0d7d43061b336b8f83aa56b012b703465afaa9a930fb0c3af69b2d062816d579"} Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.378245 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d7d43061b336b8f83aa56b012b703465afaa9a930fb0c3af69b2d062816d579" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.378214 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1b47-account-create-update-tb2wz" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.453992 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtcl2\" (UniqueName: \"kubernetes.io/projected/aa199068-fd5f-415d-82bd-32fd3b23a926-kube-api-access-wtcl2\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:10 crc kubenswrapper[4922]: I0126 14:28:10.454026 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa199068-fd5f-415d-82bd-32fd3b23a926-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:11 crc kubenswrapper[4922]: I0126 14:28:11.307894 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:28:11 crc kubenswrapper[4922]: I0126 14:28:11.308273 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.681710 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.688369 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x7466" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.692925 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.733681 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21553358-b59a-4191-8376-e66491f5eadf-operator-scripts\") pod \"21553358-b59a-4191-8376-e66491f5eadf\" (UID: \"21553358-b59a-4191-8376-e66491f5eadf\") " Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.733741 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twmjw\" (UniqueName: \"kubernetes.io/projected/21553358-b59a-4191-8376-e66491f5eadf-kube-api-access-twmjw\") pod \"21553358-b59a-4191-8376-e66491f5eadf\" (UID: \"21553358-b59a-4191-8376-e66491f5eadf\") " Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.733799 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-operator-scripts\") pod \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\" (UID: \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\") " Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.733831 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6e159-08af-466b-ac1f-0faab319b9f5-operator-scripts\") pod \"e7b6e159-08af-466b-ac1f-0faab319b9f5\" (UID: \"e7b6e159-08af-466b-ac1f-0faab319b9f5\") " Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.734055 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5shcf\" (UniqueName: \"kubernetes.io/projected/e7b6e159-08af-466b-ac1f-0faab319b9f5-kube-api-access-5shcf\") pod \"e7b6e159-08af-466b-ac1f-0faab319b9f5\" (UID: \"e7b6e159-08af-466b-ac1f-0faab319b9f5\") " Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.734102 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5299\" (UniqueName: \"kubernetes.io/projected/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-kube-api-access-s5299\") pod \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\" (UID: \"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705\") " Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.734565 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21553358-b59a-4191-8376-e66491f5eadf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21553358-b59a-4191-8376-e66491f5eadf" (UID: "21553358-b59a-4191-8376-e66491f5eadf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.735171 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d19ecea1-4e7a-4bd9-9a7f-dca95afbe705" (UID: "d19ecea1-4e7a-4bd9-9a7f-dca95afbe705"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.735659 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b6e159-08af-466b-ac1f-0faab319b9f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7b6e159-08af-466b-ac1f-0faab319b9f5" (UID: "e7b6e159-08af-466b-ac1f-0faab319b9f5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.742822 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-kube-api-access-s5299" (OuterVolumeSpecName: "kube-api-access-s5299") pod "d19ecea1-4e7a-4bd9-9a7f-dca95afbe705" (UID: "d19ecea1-4e7a-4bd9-9a7f-dca95afbe705"). InnerVolumeSpecName "kube-api-access-s5299". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.747411 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b6e159-08af-466b-ac1f-0faab319b9f5-kube-api-access-5shcf" (OuterVolumeSpecName: "kube-api-access-5shcf") pod "e7b6e159-08af-466b-ac1f-0faab319b9f5" (UID: "e7b6e159-08af-466b-ac1f-0faab319b9f5"). InnerVolumeSpecName "kube-api-access-5shcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.753418 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21553358-b59a-4191-8376-e66491f5eadf-kube-api-access-twmjw" (OuterVolumeSpecName: "kube-api-access-twmjw") pod "21553358-b59a-4191-8376-e66491f5eadf" (UID: "21553358-b59a-4191-8376-e66491f5eadf"). InnerVolumeSpecName "kube-api-access-twmjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.836695 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5shcf\" (UniqueName: \"kubernetes.io/projected/e7b6e159-08af-466b-ac1f-0faab319b9f5-kube-api-access-5shcf\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.836740 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5299\" (UniqueName: \"kubernetes.io/projected/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-kube-api-access-s5299\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.836750 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21553358-b59a-4191-8376-e66491f5eadf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.836763 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twmjw\" (UniqueName: \"kubernetes.io/projected/21553358-b59a-4191-8376-e66491f5eadf-kube-api-access-twmjw\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.836772 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:13 crc kubenswrapper[4922]: I0126 14:28:13.836783 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b6e159-08af-466b-ac1f-0faab319b9f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.430284 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-x7466" Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.430289 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x7466" event={"ID":"e7b6e159-08af-466b-ac1f-0faab319b9f5","Type":"ContainerDied","Data":"9207bbebc2f8916f340f21995ba48c3c9c119634b4206312a781c515c33a8b4b"} Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.431034 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9207bbebc2f8916f340f21995ba48c3c9c119634b4206312a781c515c33a8b4b" Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.432989 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerStarted","Data":"447de18641794d9020adf36fe8231ecdd0acea76d06e6eaa05e9d5723c535a11"} Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.436223 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-643c-account-create-update-v5bs8" event={"ID":"d19ecea1-4e7a-4bd9-9a7f-dca95afbe705","Type":"ContainerDied","Data":"67f8bf45bbda36af93ffc837d75a53b5333ad5be9c46785b75b8c4845a1e5510"} Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.436282 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67f8bf45bbda36af93ffc837d75a53b5333ad5be9c46785b75b8c4845a1e5510" Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.436243 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-643c-account-create-update-v5bs8" Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.437814 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0ea3-account-create-update-6vlln" event={"ID":"21553358-b59a-4191-8376-e66491f5eadf","Type":"ContainerDied","Data":"2fe1685f36784ac8ca3ab0fd903384a18f79c755b25fa827ff73a9870f2717cd"} Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.437852 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fe1685f36784ac8ca3ab0fd903384a18f79c755b25fa827ff73a9870f2717cd" Jan 26 14:28:14 crc kubenswrapper[4922]: I0126 14:28:14.437856 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0ea3-account-create-update-6vlln" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.997929 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-lt6mt"] Jan 26 14:28:16 crc kubenswrapper[4922]: E0126 14:28:16.998626 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1526bd7f-501e-4add-b2c3-e1f6f803d3cf" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.998645 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1526bd7f-501e-4add-b2c3-e1f6f803d3cf" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: E0126 14:28:16.998668 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21553358-b59a-4191-8376-e66491f5eadf" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.998676 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="21553358-b59a-4191-8376-e66491f5eadf" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: E0126 14:28:16.998687 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b6e159-08af-466b-ac1f-0faab319b9f5" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.998695 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b6e159-08af-466b-ac1f-0faab319b9f5" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: E0126 14:28:16.998708 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5acb458f-2080-4c36-86cd-d0e8004b9f9d" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.998714 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5acb458f-2080-4c36-86cd-d0e8004b9f9d" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: E0126 14:28:16.998731 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c573fb3-9fba-47ec-8951-b069561ed90e" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.998739 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c573fb3-9fba-47ec-8951-b069561ed90e" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: E0126 14:28:16.998756 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa199068-fd5f-415d-82bd-32fd3b23a926" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.998764 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa199068-fd5f-415d-82bd-32fd3b23a926" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: E0126 14:28:16.998778 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19ecea1-4e7a-4bd9-9a7f-dca95afbe705" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.998785 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19ecea1-4e7a-4bd9-9a7f-dca95afbe705" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: E0126 14:28:16.998796 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5539a2-9e43-4f3b-8dbf-14d091e7b37d" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.999097 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5539a2-9e43-4f3b-8dbf-14d091e7b37d" 
containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.999324 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa199068-fd5f-415d-82bd-32fd3b23a926" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.999343 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c573fb3-9fba-47ec-8951-b069561ed90e" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.999353 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5539a2-9e43-4f3b-8dbf-14d091e7b37d" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.999362 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1526bd7f-501e-4add-b2c3-e1f6f803d3cf" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.999372 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19ecea1-4e7a-4bd9-9a7f-dca95afbe705" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.999386 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b6e159-08af-466b-ac1f-0faab319b9f5" containerName="mariadb-database-create" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.999395 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="21553358-b59a-4191-8376-e66491f5eadf" containerName="mariadb-account-create-update" Jan 26 14:28:16 crc kubenswrapper[4922]: I0126 14:28:16.999405 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5acb458f-2080-4c36-86cd-d0e8004b9f9d" containerName="mariadb-account-create-update" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.000128 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.003275 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.003444 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sxfb2" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.019106 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lt6mt"] Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.109968 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-combined-ca-bundle\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.110184 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-config-data\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.110243 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-db-sync-config-data\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.110357 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64mnb\" (UniqueName: \"kubernetes.io/projected/99c8b640-ac97-4a3e-8e4c-1781bd756396-kube-api-access-64mnb\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.211306 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-config-data\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.211423 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-db-sync-config-data\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.211549 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64mnb\" (UniqueName: \"kubernetes.io/projected/99c8b640-ac97-4a3e-8e4c-1781bd756396-kube-api-access-64mnb\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.211691 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-combined-ca-bundle\") pod 
\"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.216703 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-db-sync-config-data\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.216775 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-combined-ca-bundle\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.226640 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-config-data\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.226982 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64mnb\" (UniqueName: \"kubernetes.io/projected/99c8b640-ac97-4a3e-8e4c-1781bd756396-kube-api-access-64mnb\") pod \"glance-db-sync-lt6mt\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:17 crc kubenswrapper[4922]: I0126 14:28:17.325523 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lt6mt" Jan 26 14:28:19 crc kubenswrapper[4922]: I0126 14:28:19.491227 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-6mb2d" event={"ID":"eb180569-7ff8-4908-8bdd-66b681f030df","Type":"ContainerStarted","Data":"3f4700bbe7f36afeab23ccec4d54884125080ed72ca9814046aae91364cd7396"} Jan 26 14:28:19 crc kubenswrapper[4922]: I0126 14:28:19.495125 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l2gr2" event={"ID":"5060ea35-5cb6-4f74-8f86-ec622a9c83d4","Type":"ContainerStarted","Data":"2185c1ed9b133570c591250d5d2dc2793289d160b542a91be580eb0508dc5e50"} Jan 26 14:28:19 crc kubenswrapper[4922]: I0126 14:28:19.502146 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerStarted","Data":"6b81fcbf60f0aba8269d13051e5768e42e5c13475dd6d49f6087f84729ec1fcd"} Jan 26 14:28:19 crc kubenswrapper[4922]: I0126 14:28:19.522137 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-6mb2d" podStartSLOduration=2.6039462049999997 podStartE2EDuration="13.522109837s" podCreationTimestamp="2026-01-26 14:28:06 +0000 UTC" firstStartedPulling="2026-01-26 14:28:07.801652762 +0000 UTC m=+1105.003915534" lastFinishedPulling="2026-01-26 14:28:18.719816374 +0000 UTC m=+1115.922079166" observedRunningTime="2026-01-26 14:28:19.517574999 +0000 UTC m=+1116.719837781" watchObservedRunningTime="2026-01-26 14:28:19.522109837 +0000 UTC m=+1116.724372639" Jan 26 14:28:19 crc kubenswrapper[4922]: I0126 14:28:19.550643 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-l2gr2" podStartSLOduration=2.25426808 
podStartE2EDuration="15.55062161s" podCreationTimestamp="2026-01-26 14:28:04 +0000 UTC" firstStartedPulling="2026-01-26 14:28:05.398898832 +0000 UTC m=+1102.601161604" lastFinishedPulling="2026-01-26 14:28:18.695252362 +0000 UTC m=+1115.897515134" observedRunningTime="2026-01-26 14:28:19.544402105 +0000 UTC m=+1116.746664887" watchObservedRunningTime="2026-01-26 14:28:19.55062161 +0000 UTC m=+1116.752884392" Jan 26 14:28:19 crc kubenswrapper[4922]: I0126 14:28:19.600113 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=24.600089453 podStartE2EDuration="24.600089453s" podCreationTimestamp="2026-01-26 14:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:19.594801074 +0000 UTC m=+1116.797063866" watchObservedRunningTime="2026-01-26 14:28:19.600089453 +0000 UTC m=+1116.802352235" Jan 26 14:28:19 crc kubenswrapper[4922]: I0126 14:28:19.600238 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lt6mt"] Jan 26 14:28:19 crc kubenswrapper[4922]: W0126 14:28:19.607932 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99c8b640_ac97_4a3e_8e4c_1781bd756396.slice/crio-5602197bdf278ba79ffee25c5deb128f8e661811a33c76ff52a6d28649e97fc7 WatchSource:0}: Error finding container 5602197bdf278ba79ffee25c5deb128f8e661811a33c76ff52a6d28649e97fc7: Status 404 returned error can't find the container with id 5602197bdf278ba79ffee25c5deb128f8e661811a33c76ff52a6d28649e97fc7 Jan 26 14:28:20 crc kubenswrapper[4922]: I0126 14:28:20.509826 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lt6mt" event={"ID":"99c8b640-ac97-4a3e-8e4c-1781bd756396","Type":"ContainerStarted","Data":"5602197bdf278ba79ffee25c5deb128f8e661811a33c76ff52a6d28649e97fc7"} Jan 26 14:28:20 crc kubenswrapper[4922]: I0126 14:28:20.518242 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 14:28:24 crc kubenswrapper[4922]: I0126 14:28:24.159831 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:28:24 crc kubenswrapper[4922]: I0126 14:28:24.170676 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03d225b5-5466-45de-9417-54a11fa79429-etc-swift\") pod \"swift-storage-0\" (UID: \"03d225b5-5466-45de-9417-54a11fa79429\") " pod="openstack/swift-storage-0" Jan 26 14:28:24 crc kubenswrapper[4922]: I0126 14:28:24.173126 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 14:28:24 crc kubenswrapper[4922]: I0126 14:28:24.844246 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 14:28:24 crc kubenswrapper[4922]: W0126 14:28:24.853478 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03d225b5_5466_45de_9417_54a11fa79429.slice/crio-20193fdcc63fdc8628961269cd0529e62ef657433f56c3b928ba4e74003fb2c4 WatchSource:0}: Error finding container 20193fdcc63fdc8628961269cd0529e62ef657433f56c3b928ba4e74003fb2c4: Status 404 returned error can't find the container with id 20193fdcc63fdc8628961269cd0529e62ef657433f56c3b928ba4e74003fb2c4 Jan 26 14:28:25 crc kubenswrapper[4922]: I0126 14:28:25.517942 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 14:28:25 crc kubenswrapper[4922]: I0126 14:28:25.531464 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 14:28:25 crc kubenswrapper[4922]: I0126 14:28:25.568970 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"20193fdcc63fdc8628961269cd0529e62ef657433f56c3b928ba4e74003fb2c4"} Jan 26 14:28:25 crc kubenswrapper[4922]: I0126 14:28:25.577559 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 14:28:28 crc kubenswrapper[4922]: I0126 14:28:28.591077 4922 generic.go:334] "Generic (PLEG): container finished" podID="eb180569-7ff8-4908-8bdd-66b681f030df" containerID="3f4700bbe7f36afeab23ccec4d54884125080ed72ca9814046aae91364cd7396" exitCode=0 Jan 26 14:28:28 crc kubenswrapper[4922]: I0126 14:28:28.591170 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-6mb2d" event={"ID":"eb180569-7ff8-4908-8bdd-66b681f030df","Type":"ContainerDied","Data":"3f4700bbe7f36afeab23ccec4d54884125080ed72ca9814046aae91364cd7396"} Jan 26 14:28:28 crc kubenswrapper[4922]: I0126 14:28:28.605715 4922 generic.go:334] "Generic (PLEG): container finished" podID="5060ea35-5cb6-4f74-8f86-ec622a9c83d4" containerID="2185c1ed9b133570c591250d5d2dc2793289d160b542a91be580eb0508dc5e50" exitCode=0 Jan 26 14:28:28 crc kubenswrapper[4922]: I0126 14:28:28.605753 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l2gr2" event={"ID":"5060ea35-5cb6-4f74-8f86-ec622a9c83d4","Type":"ContainerDied","Data":"2185c1ed9b133570c591250d5d2dc2793289d160b542a91be580eb0508dc5e50"} Jan 26 14:28:35 crc kubenswrapper[4922]: E0126 14:28:35.649540 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 14:28:35 crc kubenswrapper[4922]: E0126 14:28:35.649925 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 14:28:35 crc kubenswrapper[4922]: E0126 14:28:35.650085 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.230:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64mnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-lt6mt_openstack(99c8b640-ac97-4a3e-8e4c-1781bd756396): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:28:35 crc kubenswrapper[4922]: E0126 14:28:35.651454 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-lt6mt" podUID="99c8b640-ac97-4a3e-8e4c-1781bd756396" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.670727 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-6mb2d" event={"ID":"eb180569-7ff8-4908-8bdd-66b681f030df","Type":"ContainerDied","Data":"b3f2c014d975659f9da57e701d5374987284ac3632cfba5e376377e9cf190a21"} Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.670788 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f2c014d975659f9da57e701d5374987284ac3632cfba5e376377e9cf190a21" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.673208 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l2gr2" event={"ID":"5060ea35-5cb6-4f74-8f86-ec622a9c83d4","Type":"ContainerDied","Data":"6df7077ff259f308df1e20014803ada0c05842d35575d5a5343be7446da5da62"} Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.673260 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6df7077ff259f308df1e20014803ada0c05842d35575d5a5343be7446da5da62" Jan 26 14:28:35 crc kubenswrapper[4922]: E0126 14:28:35.674452 4922 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-lt6mt" podUID="99c8b640-ac97-4a3e-8e4c-1781bd756396" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.718642 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.728190 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.892865 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjnc9\" (UniqueName: \"kubernetes.io/projected/eb180569-7ff8-4908-8bdd-66b681f030df-kube-api-access-bjnc9\") pod \"eb180569-7ff8-4908-8bdd-66b681f030df\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.892929 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-combined-ca-bundle\") pod \"eb180569-7ff8-4908-8bdd-66b681f030df\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.892963 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-db-sync-config-data\") pod \"eb180569-7ff8-4908-8bdd-66b681f030df\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.892992 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-combined-ca-bundle\") pod \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.893052 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-config-data\") pod \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.893128 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-config-data\") pod \"eb180569-7ff8-4908-8bdd-66b681f030df\" (UID: \"eb180569-7ff8-4908-8bdd-66b681f030df\") " Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.893156 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5nbk\" (UniqueName: \"kubernetes.io/projected/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-kube-api-access-w5nbk\") pod \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\" (UID: \"5060ea35-5cb6-4f74-8f86-ec622a9c83d4\") " Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.901181 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eb180569-7ff8-4908-8bdd-66b681f030df" (UID: "eb180569-7ff8-4908-8bdd-66b681f030df"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.903431 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-kube-api-access-w5nbk" (OuterVolumeSpecName: "kube-api-access-w5nbk") pod "5060ea35-5cb6-4f74-8f86-ec622a9c83d4" (UID: "5060ea35-5cb6-4f74-8f86-ec622a9c83d4"). InnerVolumeSpecName "kube-api-access-w5nbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.903801 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb180569-7ff8-4908-8bdd-66b681f030df-kube-api-access-bjnc9" (OuterVolumeSpecName: "kube-api-access-bjnc9") pod "eb180569-7ff8-4908-8bdd-66b681f030df" (UID: "eb180569-7ff8-4908-8bdd-66b681f030df"). InnerVolumeSpecName "kube-api-access-bjnc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.925498 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5060ea35-5cb6-4f74-8f86-ec622a9c83d4" (UID: "5060ea35-5cb6-4f74-8f86-ec622a9c83d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.925833 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb180569-7ff8-4908-8bdd-66b681f030df" (UID: "eb180569-7ff8-4908-8bdd-66b681f030df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.944820 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-config-data" (OuterVolumeSpecName: "config-data") pod "eb180569-7ff8-4908-8bdd-66b681f030df" (UID: "eb180569-7ff8-4908-8bdd-66b681f030df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.951884 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-config-data" (OuterVolumeSpecName: "config-data") pod "5060ea35-5cb6-4f74-8f86-ec622a9c83d4" (UID: "5060ea35-5cb6-4f74-8f86-ec622a9c83d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.994685 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjnc9\" (UniqueName: \"kubernetes.io/projected/eb180569-7ff8-4908-8bdd-66b681f030df-kube-api-access-bjnc9\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.994717 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.994730 4922 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.994742 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.994755 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.994766 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb180569-7ff8-4908-8bdd-66b681f030df-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:35 crc kubenswrapper[4922]: I0126 14:28:35.994779 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5nbk\" (UniqueName: \"kubernetes.io/projected/5060ea35-5cb6-4f74-8f86-ec622a9c83d4-kube-api-access-w5nbk\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:36 crc kubenswrapper[4922]: I0126 14:28:36.688736 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l2gr2" Jan 26 14:28:36 crc kubenswrapper[4922]: I0126 14:28:36.690214 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"fe3a829ada4d3a48e077705171deae18394483a4a3808e280b7177eac23b8a98"} Jan 26 14:28:36 crc kubenswrapper[4922]: I0126 14:28:36.690449 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-6mb2d" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.045319 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df6655b9f-l7lqx"] Jan 26 14:28:37 crc kubenswrapper[4922]: E0126 14:28:37.046110 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5060ea35-5cb6-4f74-8f86-ec622a9c83d4" containerName="keystone-db-sync" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.046129 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5060ea35-5cb6-4f74-8f86-ec622a9c83d4" containerName="keystone-db-sync" Jan 26 14:28:37 crc kubenswrapper[4922]: E0126 14:28:37.046156 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb180569-7ff8-4908-8bdd-66b681f030df" containerName="watcher-db-sync" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.046164 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb180569-7ff8-4908-8bdd-66b681f030df" containerName="watcher-db-sync" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.046358 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5060ea35-5cb6-4f74-8f86-ec622a9c83d4" containerName="keystone-db-sync" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.046379 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb180569-7ff8-4908-8bdd-66b681f030df" containerName="watcher-db-sync" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.047516 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.069550 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df6655b9f-l7lqx"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.119842 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dg49w"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.120890 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.126386 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.126561 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j5sp8" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.126600 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.126645 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.126753 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.160096 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dg49w"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220381 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-nb\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220442 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-config\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220478 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bnqc\" (UniqueName: \"kubernetes.io/projected/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-kube-api-access-8bnqc\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220518 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-sb\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220534 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-scripts\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220551 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt7q6\" (UniqueName: \"kubernetes.io/projected/16c33977-a379-4ffa-adda-234d9076c2dc-kube-api-access-xt7q6\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220568 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-combined-ca-bundle\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220596 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-config-data\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220622 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-fernet-keys\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220680 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-dns-svc\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.220697 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-credential-keys\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.322858 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-fernet-keys\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.322936 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-dns-svc\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323160 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-credential-keys\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323301 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-nb\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323357 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-config\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323404 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bnqc\" (UniqueName: \"kubernetes.io/projected/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-kube-api-access-8bnqc\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323462 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-sb\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323480 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-scripts\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323504 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt7q6\" (UniqueName: \"kubernetes.io/projected/16c33977-a379-4ffa-adda-234d9076c2dc-kube-api-access-xt7q6\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323524 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-combined-ca-bundle\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323546 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-config-data\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.323756 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-dns-svc\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.332492 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-sb\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.333202 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-nb\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" 
(UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.333808 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-config\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.335416 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.340788 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-combined-ca-bundle\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.341144 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-credential-keys\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.341733 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-config-data\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.344636 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-fernet-keys\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.352261 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.360584 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt7q6\" (UniqueName: \"kubernetes.io/projected/16c33977-a379-4ffa-adda-234d9076c2dc-kube-api-access-xt7q6\") pod \"dnsmasq-dns-df6655b9f-l7lqx\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.362968 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bnqc\" (UniqueName: \"kubernetes.io/projected/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-kube-api-access-8bnqc\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.365576 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.365722 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-qs5db" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.366385 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-scripts\") pod \"keystone-bootstrap-dg49w\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.367882 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.369218 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.401505 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.402839 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.429741 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.439451 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.440148 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.440229 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fb34a1-8ac1-464e-964c-c497603ff11f-logs\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.440331 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.440429 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.440537 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ec42e5-bf03-41f3-93cf-e18347511ed0-logs\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.440634 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glgqx\" (UniqueName: \"kubernetes.io/projected/e0fb34a1-8ac1-464e-964c-c497603ff11f-kube-api-access-glgqx\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.440746 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl5s4\" (UniqueName: \"kubernetes.io/projected/41ec42e5-bf03-41f3-93cf-e18347511ed0-kube-api-access-fl5s4\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.440823 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-config-data\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.440887 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.439870 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.455426 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.533531 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rvk6w"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.534662 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.558223 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sqh6v" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.558515 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.558681 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.579545 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl5s4\" (UniqueName: \"kubernetes.io/projected/41ec42e5-bf03-41f3-93cf-e18347511ed0-kube-api-access-fl5s4\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.579595 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-db-sync-config-data\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.579638 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-config-data\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.579665 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.579686 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91754680-73d8-4c72-a7bd-834959e192a1-etc-machine-id\") pod \"cinder-db-sync-rvk6w\" 
(UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.579770 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-scripts\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.579895 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.579923 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68hz\" (UniqueName: \"kubernetes.io/projected/91754680-73d8-4c72-a7bd-834959e192a1-kube-api-access-x68hz\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.579967 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.580016 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fb34a1-8ac1-464e-964c-c497603ff11f-logs\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.584513 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fb34a1-8ac1-464e-964c-c497603ff11f-logs\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.584586 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-config-data\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.584656 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-combined-ca-bundle\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.584710 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.584764 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.584819 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ec42e5-bf03-41f3-93cf-e18347511ed0-logs\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.584882 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glgqx\" (UniqueName: \"kubernetes.io/projected/e0fb34a1-8ac1-464e-964c-c497603ff11f-kube-api-access-glgqx\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.585587 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-config-data\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.588408 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.588607 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ec42e5-bf03-41f3-93cf-e18347511ed0-logs\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.589110 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.596231 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.596634 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.598635 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " 
pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.628916 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl5s4\" (UniqueName: \"kubernetes.io/projected/41ec42e5-bf03-41f3-93cf-e18347511ed0-kube-api-access-fl5s4\") pod \"watcher-decision-engine-0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.634005 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-44cnx"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.637921 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glgqx\" (UniqueName: \"kubernetes.io/projected/e0fb34a1-8ac1-464e-964c-c497603ff11f-kube-api-access-glgqx\") pod \"watcher-api-0\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.671936 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.679946 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.680163 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.680296 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9pv62" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.687261 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-db-sync-config-data\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.687314 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91754680-73d8-4c72-a7bd-834959e192a1-etc-machine-id\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.687349 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-scripts\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.687395 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x68hz\" (UniqueName: \"kubernetes.io/projected/91754680-73d8-4c72-a7bd-834959e192a1-kube-api-access-x68hz\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.687419 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-config-data\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.687437 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-combined-ca-bundle\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.693797 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91754680-73d8-4c72-a7bd-834959e192a1-etc-machine-id\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.693854 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.695213 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.699928 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-db-sync-config-data\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.701853 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-config-data\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.715664 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.716937 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rvk6w"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.717321 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-scripts\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.725588 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-combined-ca-bundle\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.733535 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.754864 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x68hz\" (UniqueName: \"kubernetes.io/projected/91754680-73d8-4c72-a7bd-834959e192a1-kube-api-access-x68hz\") pod \"cinder-db-sync-rvk6w\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.756147 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"6d7a32bcd4c585882ca88f4ea18f876b4dbd72bbf4461b3471202b0f62726bad"} Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.756179 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"40934f88bdea2bb4349e554538cf3116769826162efc5ad6182306be588193aa"} Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.764481 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.776197 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c565fcfc7-gdkrq"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.787399 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.789140 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.802241 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-config-data\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.802327 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7784db-1198-4bd4-bed0-da049559613b-logs\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.802346 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdqx9\" (UniqueName: \"kubernetes.io/projected/281c4d86-0cfa-4637-9106-2099e20add9a-kube-api-access-kdqx9\") pod \"neutron-db-sync-44cnx\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.802548 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fp2q\" (UniqueName: \"kubernetes.io/projected/3b7784db-1198-4bd4-bed0-da049559613b-kube-api-access-6fp2q\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 
14:28:37.802601 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-combined-ca-bundle\") pod \"neutron-db-sync-44cnx\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.802637 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-config\") pod \"neutron-db-sync-44cnx\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.840326 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.840375 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.840480 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.840527 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-cg6nf" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.847318 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.858025 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-44cnx"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.895290 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904099 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fp2q\" (UniqueName: \"kubernetes.io/projected/3b7784db-1198-4bd4-bed0-da049559613b-kube-api-access-6fp2q\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904136 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-combined-ca-bundle\") pod \"neutron-db-sync-44cnx\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904161 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-config\") pod \"neutron-db-sync-44cnx\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904209 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-config-data\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904228 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904251 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgfdw\" (UniqueName: \"kubernetes.io/projected/5df15689-477e-4c5b-a42f-f43a103f7a2e-kube-api-access-lgfdw\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904276 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-config-data\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904309 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7784db-1198-4bd4-bed0-da049559613b-logs\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904324 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdqx9\" (UniqueName: \"kubernetes.io/projected/281c4d86-0cfa-4637-9106-2099e20add9a-kube-api-access-kdqx9\") pod \"neutron-db-sync-44cnx\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904357 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df15689-477e-4c5b-a42f-f43a103f7a2e-logs\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904374 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-scripts\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.904393 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5df15689-477e-4c5b-a42f-f43a103f7a2e-horizon-secret-key\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.914248 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7784db-1198-4bd4-bed0-da049559613b-logs\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.915198 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c565fcfc7-gdkrq"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.916084 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-combined-ca-bundle\") pod \"neutron-db-sync-44cnx\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.916564 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.920960 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-config\") pod \"neutron-db-sync-44cnx\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.925711 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-config-data\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.945102 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.945605 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdqx9\" (UniqueName: \"kubernetes.io/projected/281c4d86-0cfa-4637-9106-2099e20add9a-kube-api-access-kdqx9\") pod \"neutron-db-sync-44cnx\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.947236 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.949709 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.952648 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.953569 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fp2q\" (UniqueName: \"kubernetes.io/projected/3b7784db-1198-4bd4-bed0-da049559613b-kube-api-access-6fp2q\") pod \"watcher-applier-0\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " pod="openstack/watcher-applier-0" Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.979004 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df6655b9f-l7lqx"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.990168 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fb94c898f-56t8m"] Jan 26 14:28:37 crc kubenswrapper[4922]: I0126 14:28:37.992607 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.000998 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006075 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006109 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-log-httpd\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006134 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006154 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-config-data\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006177 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-scripts\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006200 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntrs9\" (UniqueName: \"kubernetes.io/projected/4483d7ac-397e-4220-82f3-c6832fe69c2e-kube-api-access-ntrs9\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006222 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgfdw\" (UniqueName: \"kubernetes.io/projected/5df15689-477e-4c5b-a42f-f43a103f7a2e-kube-api-access-lgfdw\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006251 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-config-data\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006284 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-run-httpd\") pod \"ceilometer-0\" (UID: 
\"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006325 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-scripts\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006341 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df15689-477e-4c5b-a42f-f43a103f7a2e-logs\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.006362 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5df15689-477e-4c5b-a42f-f43a103f7a2e-horizon-secret-key\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.009171 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb94c898f-56t8m"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.010978 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-config-data\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.012027 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-scripts\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.012289 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df15689-477e-4c5b-a42f-f43a103f7a2e-logs\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.017839 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5df15689-477e-4c5b-a42f-f43a103f7a2e-horizon-secret-key\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.027247 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-nzdsh"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.028991 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.031952 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9cr42" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.034345 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xdqj7"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.035370 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.035763 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.041606 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.041779 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2wlz5" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.041909 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.043681 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgfdw\" (UniqueName: \"kubernetes.io/projected/5df15689-477e-4c5b-a42f-f43a103f7a2e-kube-api-access-lgfdw\") pod \"horizon-6c565fcfc7-gdkrq\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.051132 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nzdsh"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.061596 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795f4f48c7-pfcfp"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.063147 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.083644 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xdqj7"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.104109 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4f48c7-pfcfp"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.107935 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-config\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.107994 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-config-data\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108017 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-db-sync-config-data\") pod \"barbican-db-sync-nzdsh\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108052 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108102 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108121 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdfm\" (UniqueName: \"kubernetes.io/projected/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-kube-api-access-6zdfm\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108140 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-logs\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108156 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-log-httpd\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108185 
4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108204 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-scripts\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108224 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntrs9\" (UniqueName: \"kubernetes.io/projected/4483d7ac-397e-4220-82f3-c6832fe69c2e-kube-api-access-ntrs9\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108242 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-combined-ca-bundle\") pod \"barbican-db-sync-nzdsh\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108258 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af68ec3-0690-4a73-9f25-3a50652fbe34-logs\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108280 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-config-data\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108297 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6g8d\" (UniqueName: \"kubernetes.io/projected/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-kube-api-access-l6g8d\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108536 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzxhf\" (UniqueName: \"kubernetes.io/projected/64dc8567-a56e-4cf4-8155-5b06c405f7ba-kube-api-access-pzxhf\") pod \"barbican-db-sync-nzdsh\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108567 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108581 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-config-data\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108604 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9af68ec3-0690-4a73-9f25-3a50652fbe34-horizon-secret-key\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108628 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-run-httpd\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108661 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-scripts\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108681 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bt8\" (UniqueName: \"kubernetes.io/projected/9af68ec3-0690-4a73-9f25-3a50652fbe34-kube-api-access-49bt8\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108709 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-dns-svc\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108729 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-combined-ca-bundle\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.108747 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-scripts\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.112296 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-run-httpd\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.112680 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.113683 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-log-httpd\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.117572 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-scripts\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.121355 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-config-data\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.125049 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.180505 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-44cnx" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.181982 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntrs9\" (UniqueName: \"kubernetes.io/projected/4483d7ac-397e-4220-82f3-c6832fe69c2e-kube-api-access-ntrs9\") pod \"ceilometer-0\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.193003 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.215233 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-config\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218325 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-config-data\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218367 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-db-sync-config-data\") pod \"barbican-db-sync-nzdsh\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218444 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218471 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdfm\" (UniqueName: \"kubernetes.io/projected/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-kube-api-access-6zdfm\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218510 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-logs\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218607 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-combined-ca-bundle\") pod \"barbican-db-sync-nzdsh\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218634 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af68ec3-0690-4a73-9f25-3a50652fbe34-logs\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218682 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6g8d\" (UniqueName: \"kubernetes.io/projected/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-kube-api-access-l6g8d\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218705 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzxhf\" (UniqueName: \"kubernetes.io/projected/64dc8567-a56e-4cf4-8155-5b06c405f7ba-kube-api-access-pzxhf\") pod \"barbican-db-sync-nzdsh\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218734 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218756 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-config-data\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218798 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9af68ec3-0690-4a73-9f25-3a50652fbe34-horizon-secret-key\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218885 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-scripts\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218917 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49bt8\" (UniqueName: \"kubernetes.io/projected/9af68ec3-0690-4a73-9f25-3a50652fbe34-kube-api-access-49bt8\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.218986 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-dns-svc\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.219018 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-combined-ca-bundle\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.219052 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-scripts\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.224574 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-scripts\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.225414 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-logs\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.225703 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.217301 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-config\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.229098 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af68ec3-0690-4a73-9f25-3a50652fbe34-logs\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.232205 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-db-sync-config-data\") pod \"barbican-db-sync-nzdsh\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.233527 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.234451 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-combined-ca-bundle\") pod \"barbican-db-sync-nzdsh\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.235606 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-dns-svc\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.236234 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-config-data\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.242830 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-scripts\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.244138 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.245603 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-combined-ca-bundle\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.246708 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-config-data\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.248145 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9af68ec3-0690-4a73-9f25-3a50652fbe34-horizon-secret-key\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.249766 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzxhf\" (UniqueName: \"kubernetes.io/projected/64dc8567-a56e-4cf4-8155-5b06c405f7ba-kube-api-access-pzxhf\") pod \"barbican-db-sync-nzdsh\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.251342 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l6g8d\" (UniqueName: \"kubernetes.io/projected/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-kube-api-access-l6g8d\") pod \"placement-db-sync-xdqj7\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.252561 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49bt8\" (UniqueName: \"kubernetes.io/projected/9af68ec3-0690-4a73-9f25-3a50652fbe34-kube-api-access-49bt8\") pod \"horizon-5fb94c898f-56t8m\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.264534 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdfm\" (UniqueName: \"kubernetes.io/projected/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-kube-api-access-6zdfm\") pod \"dnsmasq-dns-795f4f48c7-pfcfp\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.321684 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.340950 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.359394 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df6655b9f-l7lqx"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.375503 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.391995 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xdqj7" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.413041 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.488932 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dg49w"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.690130 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.697440 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.803861 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" event={"ID":"16c33977-a379-4ffa-adda-234d9076c2dc","Type":"ContainerStarted","Data":"f60050ef0f9970af808256b07e4c8a109f2540e534d81f875701ce980cbab78d"} Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.806958 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e0fb34a1-8ac1-464e-964c-c497603ff11f","Type":"ContainerStarted","Data":"f3403b84c03f9d59baa935e1a87aa58ad64b41b0eb9abec5ae474da0a5cf4557"} Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.808411 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dg49w" event={"ID":"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d","Type":"ContainerStarted","Data":"27fa0066c5a2b95b5c2e130010591e823f38829927f1a2e7541cc84eb4223986"} Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.810846 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"f4aafb9ec3f6040d26e4a22251c2d4468fb55692920dc1bd3bab21880fcf85f7"} Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.828829 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"41ec42e5-bf03-41f3-93cf-e18347511ed0","Type":"ContainerStarted","Data":"405bd8a5749fd02b1737ce22270c1ed426e4db152a57b2cd6caf0ef3c95ac7c6"} Jan 26 14:28:38 crc kubenswrapper[4922]: I0126 14:28:38.886724 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rvk6w"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.136282 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-44cnx"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.141322 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb94c898f-56t8m"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.154438 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 14:28:39 crc kubenswrapper[4922]: W0126 14:28:39.159614 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af68ec3_0690_4a73_9f25_3a50652fbe34.slice/crio-85be9bd67def3ac03889653fcf297f6f25a3513e8be9656c994092e1afc8d6d7 WatchSource:0}: Error finding container 85be9bd67def3ac03889653fcf297f6f25a3513e8be9656c994092e1afc8d6d7: Status 404 returned error can't find the container with id 85be9bd67def3ac03889653fcf297f6f25a3513e8be9656c994092e1afc8d6d7 Jan 26 14:28:39 crc kubenswrapper[4922]: W0126 14:28:39.171663 4922 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod281c4d86_0cfa_4637_9106_2099e20add9a.slice/crio-f24353b9ab22cba890a4dfa5dbf770f418d3c94bd461190048dd7bffa1c65c2d WatchSource:0}: Error finding container f24353b9ab22cba890a4dfa5dbf770f418d3c94bd461190048dd7bffa1c65c2d: Status 404 returned error can't find the container with id f24353b9ab22cba890a4dfa5dbf770f418d3c94bd461190048dd7bffa1c65c2d Jan 26 14:28:39 crc kubenswrapper[4922]: W0126 14:28:39.184871 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b7784db_1198_4bd4_bed0_da049559613b.slice/crio-77d287c8fd02e126d5db68498dccd68ac34698520dcb57d967466851ef6adc47 WatchSource:0}: Error finding container 77d287c8fd02e126d5db68498dccd68ac34698520dcb57d967466851ef6adc47: Status 404 returned error can't find the container with id 77d287c8fd02e126d5db68498dccd68ac34698520dcb57d967466851ef6adc47 Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.269888 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nzdsh"] Jan 26 14:28:39 crc kubenswrapper[4922]: W0126 14:28:39.270177 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64dc8567_a56e_4cf4_8155_5b06c405f7ba.slice/crio-94c6a58e0d7f55fe6a4d18fb19488a7bd43f0b839bf7e8e05cb53d211f54f8a6 WatchSource:0}: Error finding container 94c6a58e0d7f55fe6a4d18fb19488a7bd43f0b839bf7e8e05cb53d211f54f8a6: Status 404 returned error can't find the container with id 94c6a58e0d7f55fe6a4d18fb19488a7bd43f0b839bf7e8e05cb53d211f54f8a6 Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.424087 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4f48c7-pfcfp"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.439210 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.451922 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c565fcfc7-gdkrq"] Jan 26 14:28:39 crc kubenswrapper[4922]: W0126 14:28:39.477726 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4483d7ac_397e_4220_82f3_c6832fe69c2e.slice/crio-793457ac10eebc77eeeb9d8cc72bf66d8ee04144762a5123d9fddd64ee661e42 WatchSource:0}: Error finding container 793457ac10eebc77eeeb9d8cc72bf66d8ee04144762a5123d9fddd64ee661e42: Status 404 returned error can't find the container with id 793457ac10eebc77eeeb9d8cc72bf66d8ee04144762a5123d9fddd64ee661e42 Jan 26 14:28:39 crc kubenswrapper[4922]: W0126 14:28:39.486435 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5df15689_477e_4c5b_a42f_f43a103f7a2e.slice/crio-07531e6f72e5610c361282259b9ddc888371c9a3e3536c8fbe71afdc690b0374 WatchSource:0}: Error finding container 07531e6f72e5610c361282259b9ddc888371c9a3e3536c8fbe71afdc690b0374: Status 404 returned error can't find the container with id 07531e6f72e5610c361282259b9ddc888371c9a3e3536c8fbe71afdc690b0374 Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.554920 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.598017 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fb94c898f-56t8m"] Jan 26 14:28:39 crc 
kubenswrapper[4922]: I0126 14:28:39.661126 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-865758bb69-6rftr"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.662782 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.690789 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-865758bb69-6rftr"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.708234 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xdqj7"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.739197 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.758710 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-config-data\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.758774 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b639ab62-28b8-4871-84d0-f28774600973-horizon-secret-key\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.758802 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrp4\" (UniqueName: \"kubernetes.io/projected/b639ab62-28b8-4871-84d0-f28774600973-kube-api-access-kxrp4\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.758823 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b639ab62-28b8-4871-84d0-f28774600973-logs\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.758942 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-scripts\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.846990 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4483d7ac-397e-4220-82f3-c6832fe69c2e","Type":"ContainerStarted","Data":"793457ac10eebc77eeeb9d8cc72bf66d8ee04144762a5123d9fddd64ee661e42"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.849586 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c565fcfc7-gdkrq" event={"ID":"5df15689-477e-4c5b-a42f-f43a103f7a2e","Type":"ContainerStarted","Data":"07531e6f72e5610c361282259b9ddc888371c9a3e3536c8fbe71afdc690b0374"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.855189 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xdqj7" 
event={"ID":"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9","Type":"ContainerStarted","Data":"247ce08d365649e22911febfe57c365bee476d703aed8113693a40675504a2ba"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.857045 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nzdsh" event={"ID":"64dc8567-a56e-4cf4-8155-5b06c405f7ba","Type":"ContainerStarted","Data":"94c6a58e0d7f55fe6a4d18fb19488a7bd43f0b839bf7e8e05cb53d211f54f8a6"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.858020 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvk6w" event={"ID":"91754680-73d8-4c72-a7bd-834959e192a1","Type":"ContainerStarted","Data":"402249eb3c7eb85afb7051d3325eab7b11c1b74773ee1226af39e96a94176e51"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.860095 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-config-data\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.860154 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b639ab62-28b8-4871-84d0-f28774600973-horizon-secret-key\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.860173 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxrp4\" (UniqueName: \"kubernetes.io/projected/b639ab62-28b8-4871-84d0-f28774600973-kube-api-access-kxrp4\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.860193 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b639ab62-28b8-4871-84d0-f28774600973-logs\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.860272 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-scripts\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.860724 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b639ab62-28b8-4871-84d0-f28774600973-logs\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.861215 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-scripts\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.862091 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-config-data\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.866422 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b639ab62-28b8-4871-84d0-f28774600973-horizon-secret-key\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.869970 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-44cnx" event={"ID":"281c4d86-0cfa-4637-9106-2099e20add9a","Type":"ContainerStarted","Data":"392dfe958b3ed5aee9c1d4a1b60e37539a9c063ac7641d6f76d3547f769fedc1"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.869999 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-44cnx" event={"ID":"281c4d86-0cfa-4637-9106-2099e20add9a","Type":"ContainerStarted","Data":"f24353b9ab22cba890a4dfa5dbf770f418d3c94bd461190048dd7bffa1c65c2d"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.874570 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" event={"ID":"9d9d5644-40b7-49a4-9d01-6158dcb79c3d","Type":"ContainerStarted","Data":"9ffd81f1537f366df0bb7e7e055c9f35af41a07bc5a64c2b3dfe0e3771608e53"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.883644 4922 generic.go:334] "Generic (PLEG): container finished" podID="16c33977-a379-4ffa-adda-234d9076c2dc" containerID="af7c41878719da07d9733ce515b5dd39b64e73a22ab4a5f78a70318de3dbe28c" exitCode=0 Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.883743 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" event={"ID":"16c33977-a379-4ffa-adda-234d9076c2dc","Type":"ContainerDied","Data":"af7c41878719da07d9733ce515b5dd39b64e73a22ab4a5f78a70318de3dbe28c"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.888617 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxrp4\" (UniqueName: \"kubernetes.io/projected/b639ab62-28b8-4871-84d0-f28774600973-kube-api-access-kxrp4\") pod \"horizon-865758bb69-6rftr\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.889964 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-44cnx" podStartSLOduration=2.889946948 podStartE2EDuration="2.889946948s" podCreationTimestamp="2026-01-26 14:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:39.884143475 +0000 UTC m=+1137.086406247" watchObservedRunningTime="2026-01-26 14:28:39.889946948 +0000 UTC m=+1137.092209720" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.890619 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb94c898f-56t8m" event={"ID":"9af68ec3-0690-4a73-9f25-3a50652fbe34","Type":"ContainerStarted","Data":"85be9bd67def3ac03889653fcf297f6f25a3513e8be9656c994092e1afc8d6d7"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.893242 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"e0fb34a1-8ac1-464e-964c-c497603ff11f","Type":"ContainerStarted","Data":"41c2feae30319db74ed0070366aeacba2673e8c7ee8a73bf72f82364ecf842d2"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.893267 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e0fb34a1-8ac1-464e-964c-c497603ff11f","Type":"ContainerStarted","Data":"9c27beff9fd9b342fc5d1de5421c2427c05c012c3f67f9619e0f59fef0745832"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.893951 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.894653 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"3b7784db-1198-4bd4-bed0-da049559613b","Type":"ContainerStarted","Data":"77d287c8fd02e126d5db68498dccd68ac34698520dcb57d967466851ef6adc47"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.898487 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dg49w" event={"ID":"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d","Type":"ContainerStarted","Data":"8d71bb4928fa54b6f37481baf35f506d97bd2e70fd3b905b3f846851e0cd95a0"} Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.931297 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.931281121 podStartE2EDuration="2.931281121s" podCreationTimestamp="2026-01-26 14:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:39.925144309 +0000 UTC m=+1137.127407081" watchObservedRunningTime="2026-01-26 14:28:39.931281121 +0000 UTC m=+1137.133543893" Jan 26 14:28:39 crc kubenswrapper[4922]: I0126 14:28:39.946884 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dg49w" podStartSLOduration=2.946868561 podStartE2EDuration="2.946868561s" podCreationTimestamp="2026-01-26 14:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:28:39.940207144 +0000 UTC m=+1137.142469936" watchObservedRunningTime="2026-01-26 14:28:39.946868561 +0000 UTC m=+1137.149131333" Jan 26 14:28:40 crc kubenswrapper[4922]: I0126 14:28:40.099381 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:28:40 crc kubenswrapper[4922]: I0126 14:28:40.596587 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-865758bb69-6rftr"] Jan 26 14:28:40 crc kubenswrapper[4922]: I0126 14:28:40.911627 4922 generic.go:334] "Generic (PLEG): container finished" podID="9d9d5644-40b7-49a4-9d01-6158dcb79c3d" containerID="fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe" exitCode=0 Jan 26 14:28:40 crc kubenswrapper[4922]: I0126 14:28:40.912880 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" event={"ID":"9d9d5644-40b7-49a4-9d01-6158dcb79c3d","Type":"ContainerDied","Data":"fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe"} Jan 26 14:28:40 crc kubenswrapper[4922]: I0126 14:28:40.913001 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api-log" containerID="cri-o://9c27beff9fd9b342fc5d1de5421c2427c05c012c3f67f9619e0f59fef0745832" gracePeriod=30 Jan 26 14:28:40 crc kubenswrapper[4922]: I0126 14:28:40.914590 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" containerID="cri-o://41c2feae30319db74ed0070366aeacba2673e8c7ee8a73bf72f82364ecf842d2" gracePeriod=30 Jan 26 14:28:40 crc kubenswrapper[4922]: I0126 14:28:40.925232 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": EOF" Jan 26 14:28:41 crc kubenswrapper[4922]: I0126 14:28:41.306889 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:28:41 crc kubenswrapper[4922]: I0126 14:28:41.306951 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:28:41 crc kubenswrapper[4922]: E0126 14:28:41.586432 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0fb34a1_8ac1_464e_964c_c497603ff11f.slice/crio-conmon-9c27beff9fd9b342fc5d1de5421c2427c05c012c3f67f9619e0f59fef0745832.scope\": RecentStats: unable to find data in memory cache]" Jan 26 14:28:41 crc kubenswrapper[4922]: I0126 14:28:41.927480 4922 generic.go:334] "Generic (PLEG): container finished" podID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerID="9c27beff9fd9b342fc5d1de5421c2427c05c012c3f67f9619e0f59fef0745832" exitCode=143 Jan 26 14:28:41 crc kubenswrapper[4922]: I0126 14:28:41.927527 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e0fb34a1-8ac1-464e-964c-c497603ff11f","Type":"ContainerDied","Data":"9c27beff9fd9b342fc5d1de5421c2427c05c012c3f67f9619e0f59fef0745832"} Jan 26 14:28:42 crc kubenswrapper[4922]: I0126 
14:28:42.765205 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.079574 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.234517 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-dns-svc\") pod \"16c33977-a379-4ffa-adda-234d9076c2dc\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.234599 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-nb\") pod \"16c33977-a379-4ffa-adda-234d9076c2dc\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.234664 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-config\") pod \"16c33977-a379-4ffa-adda-234d9076c2dc\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.234711 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-sb\") pod \"16c33977-a379-4ffa-adda-234d9076c2dc\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.234830 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt7q6\" (UniqueName: \"kubernetes.io/projected/16c33977-a379-4ffa-adda-234d9076c2dc-kube-api-access-xt7q6\") pod \"16c33977-a379-4ffa-adda-234d9076c2dc\" (UID: \"16c33977-a379-4ffa-adda-234d9076c2dc\") " Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.269504 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16c33977-a379-4ffa-adda-234d9076c2dc" (UID: "16c33977-a379-4ffa-adda-234d9076c2dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.271431 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c33977-a379-4ffa-adda-234d9076c2dc-kube-api-access-xt7q6" (OuterVolumeSpecName: "kube-api-access-xt7q6") pod "16c33977-a379-4ffa-adda-234d9076c2dc" (UID: "16c33977-a379-4ffa-adda-234d9076c2dc"). InnerVolumeSpecName "kube-api-access-xt7q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.286596 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16c33977-a379-4ffa-adda-234d9076c2dc" (UID: "16c33977-a379-4ffa-adda-234d9076c2dc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.291323 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-config" (OuterVolumeSpecName: "config") pod "16c33977-a379-4ffa-adda-234d9076c2dc" (UID: "16c33977-a379-4ffa-adda-234d9076c2dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.302887 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16c33977-a379-4ffa-adda-234d9076c2dc" (UID: "16c33977-a379-4ffa-adda-234d9076c2dc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.338459 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt7q6\" (UniqueName: \"kubernetes.io/projected/16c33977-a379-4ffa-adda-234d9076c2dc-kube-api-access-xt7q6\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.338495 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.338505 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.338513 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.338523 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c33977-a379-4ffa-adda-234d9076c2dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.674183 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": read tcp 10.217.0.2:44756->10.217.0.152:9322: read: connection reset by peer" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.674654 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": dial tcp 10.217.0.152:9322: connect: connection refused" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.947899 4922 generic.go:334] "Generic (PLEG): container finished" podID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerID="41c2feae30319db74ed0070366aeacba2673e8c7ee8a73bf72f82364ecf842d2" exitCode=0 Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.947942 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e0fb34a1-8ac1-464e-964c-c497603ff11f","Type":"ContainerDied","Data":"41c2feae30319db74ed0070366aeacba2673e8c7ee8a73bf72f82364ecf842d2"} Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 
14:28:43.950129 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-865758bb69-6rftr" event={"ID":"b639ab62-28b8-4871-84d0-f28774600973","Type":"ContainerStarted","Data":"cb7de69edfac8494f18c0ef1ff46ba96990e171ae9ff667bacc034e40746bbfd"} Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.951799 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" event={"ID":"16c33977-a379-4ffa-adda-234d9076c2dc","Type":"ContainerDied","Data":"f60050ef0f9970af808256b07e4c8a109f2540e534d81f875701ce980cbab78d"} Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.951825 4922 scope.go:117] "RemoveContainer" containerID="af7c41878719da07d9733ce515b5dd39b64e73a22ab4a5f78a70318de3dbe28c" Jan 26 14:28:43 crc kubenswrapper[4922]: I0126 14:28:43.951948 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6655b9f-l7lqx" Jan 26 14:28:44 crc kubenswrapper[4922]: I0126 14:28:44.019908 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df6655b9f-l7lqx"] Jan 26 14:28:44 crc kubenswrapper[4922]: I0126 14:28:44.031628 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-df6655b9f-l7lqx"] Jan 26 14:28:44 crc kubenswrapper[4922]: I0126 14:28:44.961097 4922 generic.go:334] "Generic (PLEG): container finished" podID="2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" containerID="8d71bb4928fa54b6f37481baf35f506d97bd2e70fd3b905b3f846851e0cd95a0" exitCode=0 Jan 26 14:28:44 crc kubenswrapper[4922]: I0126 14:28:44.961209 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dg49w" event={"ID":"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d","Type":"ContainerDied","Data":"8d71bb4928fa54b6f37481baf35f506d97bd2e70fd3b905b3f846851e0cd95a0"} Jan 26 14:28:45 crc kubenswrapper[4922]: I0126 14:28:45.102983 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c33977-a379-4ffa-adda-234d9076c2dc" path="/var/lib/kubelet/pods/16c33977-a379-4ffa-adda-234d9076c2dc/volumes" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.306321 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c565fcfc7-gdkrq"] Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.339190 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b4f749b44-2qdw7"] Jan 26 14:28:46 crc kubenswrapper[4922]: E0126 14:28:46.339614 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c33977-a379-4ffa-adda-234d9076c2dc" containerName="init" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.339631 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c33977-a379-4ffa-adda-234d9076c2dc" containerName="init" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.339824 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c33977-a379-4ffa-adda-234d9076c2dc" containerName="init" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.340812 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.345861 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.346885 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b4f749b44-2qdw7"] Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.408523 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-scripts\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.408782 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhd4h\" (UniqueName: \"kubernetes.io/projected/9bccd630-51ec-481b-97c6-1f2757dfc685-kube-api-access-vhd4h\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.408916 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-combined-ca-bundle\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.408998 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bccd630-51ec-481b-97c6-1f2757dfc685-logs\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.409080 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-tls-certs\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.409169 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-config-data\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.409339 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-secret-key\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.413318 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-865758bb69-6rftr"] Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.447608 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c779658fd-pldff"] Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.448939 
4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.474996 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c779658fd-pldff"] Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.510955 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-scripts\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.511022 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhd4h\" (UniqueName: \"kubernetes.io/projected/9bccd630-51ec-481b-97c6-1f2757dfc685-kube-api-access-vhd4h\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.511192 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-combined-ca-bundle\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.511220 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bccd630-51ec-481b-97c6-1f2757dfc685-logs\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.511241 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-tls-certs\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.511269 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-config-data\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.511294 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-secret-key\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.511635 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bccd630-51ec-481b-97c6-1f2757dfc685-logs\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.511949 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-scripts\") pod \"horizon-7b4f749b44-2qdw7\" 
(UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.512972 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-config-data\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.517823 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-tls-certs\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.517936 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-combined-ca-bundle\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.518673 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-secret-key\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.540785 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhd4h\" (UniqueName: \"kubernetes.io/projected/9bccd630-51ec-481b-97c6-1f2757dfc685-kube-api-access-vhd4h\") pod \"horizon-7b4f749b44-2qdw7\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.621057 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-horizon-tls-certs\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.621281 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-horizon-secret-key\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.621333 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-logs\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.621493 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-combined-ca-bundle\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " 
pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.621695 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-config-data\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.621818 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-scripts\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.621860 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qn6w\" (UniqueName: \"kubernetes.io/projected/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-kube-api-access-2qn6w\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.670027 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.723948 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-combined-ca-bundle\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.724035 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-config-data\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.724096 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-scripts\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.724119 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qn6w\" (UniqueName: \"kubernetes.io/projected/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-kube-api-access-2qn6w\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.724155 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-horizon-tls-certs\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.724185 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-horizon-secret-key\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.724202 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-logs\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.724580 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-logs\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.726333 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-config-data\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.727518 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-scripts\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.729966 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-horizon-secret-key\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.730823 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-horizon-tls-certs\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.733727 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-combined-ca-bundle\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.743712 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qn6w\" (UniqueName: \"kubernetes.io/projected/0c995c1b-6b75-4638-a5d7-1df1539dcaeb-kube-api-access-2qn6w\") pod \"horizon-6c779658fd-pldff\" (UID: \"0c995c1b-6b75-4638-a5d7-1df1539dcaeb\") " pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:46 crc kubenswrapper[4922]: I0126 14:28:46.782843 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:28:52 crc kubenswrapper[4922]: I0126 14:28:52.765846 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:28:53 crc kubenswrapper[4922]: E0126 14:28:53.889786 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 26 14:28:53 crc kubenswrapper[4922]: E0126 14:28:53.890143 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 26 14:28:53 crc kubenswrapper[4922]: E0126 14:28:53.890293 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.230:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6g8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-xdqj7_openstack(55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:28:53 crc kubenswrapper[4922]: E0126 14:28:53.891550 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-xdqj7" podUID="55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" Jan 26 14:28:54 crc kubenswrapper[4922]: E0126 14:28:54.087584 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-xdqj7" podUID="55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" Jan 26 14:28:57 crc kubenswrapper[4922]: I0126 14:28:57.767388 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:28:59 crc kubenswrapper[4922]: E0126 14:28:59.717320 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 14:28:59 crc kubenswrapper[4922]: E0126 14:28:59.717471 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 14:28:59 crc kubenswrapper[4922]: E0126 14:28:59.717674 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55dh5c9h558hbhc7hd9h664h695h76h5ffhf4h5d7h695h68bh5dch57bh556h5b9h597h6h575h667h555h5b6h97h667h9h585h5f5hbfh67dh5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49bt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5fb94c898f-56t8m_openstack(9af68ec3-0690-4a73-9f25-3a50652fbe34): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:28:59 crc kubenswrapper[4922]: E0126 14:28:59.720707 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-5fb94c898f-56t8m" podUID="9af68ec3-0690-4a73-9f25-3a50652fbe34" Jan 26 14:28:59 crc kubenswrapper[4922]: E0126 14:28:59.738184 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 14:28:59 crc kubenswrapper[4922]: E0126 14:28:59.738249 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 14:28:59 crc kubenswrapper[4922]: E0126 14:28:59.738421 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8h94h67dh5c7h9bh55ch9chc4h54hbch566h664h5dch58h5f6h655h5bch85h65dh589h5f7hcch5c6hcch554h5f8h695h87h5c5h584h699h546q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgfdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6c565fcfc7-gdkrq_openstack(5df15689-477e-4c5b-a42f-f43a103f7a2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:28:59 crc kubenswrapper[4922]: E0126 14:28:59.743044 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-6c565fcfc7-gdkrq" podUID="5df15689-477e-4c5b-a42f-f43a103f7a2e" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.105635 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.105697 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.105833 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.230:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc5h595h574hd5hf4h696h5d5h9h9fh66bh586hc4h657h685h688h8fhbbh584h5b4h587h59dh658h58h56bh584h595h5cdh5f4h698h67bhc4h7fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntrs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4483d7ac-397e-4220-82f3-c6832fe69c2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.142400 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.142444 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.142562 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n597h6h64fh5cbh5fdh655h88h6h4h579h678h68h578h5c6hd8hf8hfdh546h5cbh65dh67h55ch574h8ch9fhbdh99hb5h57bhc7hd6h95q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxrp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-865758bb69-6rftr_openstack(b639ab62-28b8-4871-84d0-f28774600973): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.145500 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e0fb34a1-8ac1-464e-964c-c497603ff11f","Type":"ContainerDied","Data":"f3403b84c03f9d59baa935e1a87aa58ad64b41b0eb9abec5ae474da0a5cf4557"} Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.145532 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3403b84c03f9d59baa935e1a87aa58ad64b41b0eb9abec5ae474da0a5cf4557" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.146371 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-865758bb69-6rftr" podUID="b639ab62-28b8-4871-84d0-f28774600973" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.148693 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dg49w" event={"ID":"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d","Type":"ContainerDied","Data":"27fa0066c5a2b95b5c2e130010591e823f38829927f1a2e7541cc84eb4223986"} Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.148726 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27fa0066c5a2b95b5c2e130010591e823f38829927f1a2e7541cc84eb4223986" Jan 26 14:29:00 crc kubenswrapper[4922]: 
I0126 14:29:00.244884 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.259650 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.333759 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-config-data\") pod \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.333850 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-custom-prometheus-ca\") pod \"e0fb34a1-8ac1-464e-964c-c497603ff11f\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.333950 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-combined-ca-bundle\") pod \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.333981 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-combined-ca-bundle\") pod \"e0fb34a1-8ac1-464e-964c-c497603ff11f\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.334005 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-credential-keys\") pod \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.334045 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-config-data\") pod \"e0fb34a1-8ac1-464e-964c-c497603ff11f\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.334090 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fb34a1-8ac1-464e-964c-c497603ff11f-logs\") pod \"e0fb34a1-8ac1-464e-964c-c497603ff11f\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.334142 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-scripts\") pod \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.334201 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bnqc\" (UniqueName: \"kubernetes.io/projected/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-kube-api-access-8bnqc\") pod \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.334231 4922 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-fernet-keys\") pod \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\" (UID: \"2045ac4f-f8dc-4b86-8f2a-a1f770feac2d\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.334277 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glgqx\" (UniqueName: \"kubernetes.io/projected/e0fb34a1-8ac1-464e-964c-c497603ff11f-kube-api-access-glgqx\") pod \"e0fb34a1-8ac1-464e-964c-c497603ff11f\" (UID: \"e0fb34a1-8ac1-464e-964c-c497603ff11f\") " Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.337331 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0fb34a1-8ac1-464e-964c-c497603ff11f-logs" (OuterVolumeSpecName: "logs") pod "e0fb34a1-8ac1-464e-964c-c497603ff11f" (UID: "e0fb34a1-8ac1-464e-964c-c497603ff11f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.357282 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-scripts" (OuterVolumeSpecName: "scripts") pod "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" (UID: "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.363538 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" (UID: "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.364316 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" (UID: "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.369998 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" (UID: "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.370613 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0fb34a1-8ac1-464e-964c-c497603ff11f" (UID: "e0fb34a1-8ac1-464e-964c-c497603ff11f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.372115 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-kube-api-access-8bnqc" (OuterVolumeSpecName: "kube-api-access-8bnqc") pod "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" (UID: "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d"). InnerVolumeSpecName "kube-api-access-8bnqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.378029 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-config-data" (OuterVolumeSpecName: "config-data") pod "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" (UID: "2045ac4f-f8dc-4b86-8f2a-a1f770feac2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.378353 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fb34a1-8ac1-464e-964c-c497603ff11f-kube-api-access-glgqx" (OuterVolumeSpecName: "kube-api-access-glgqx") pod "e0fb34a1-8ac1-464e-964c-c497603ff11f" (UID: "e0fb34a1-8ac1-464e-964c-c497603ff11f"). InnerVolumeSpecName "kube-api-access-glgqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.391980 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e0fb34a1-8ac1-464e-964c-c497603ff11f" (UID: "e0fb34a1-8ac1-464e-964c-c497603ff11f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.420249 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-config-data" (OuterVolumeSpecName: "config-data") pod "e0fb34a1-8ac1-464e-964c-c497603ff11f" (UID: "e0fb34a1-8ac1-464e-964c-c497603ff11f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436218 4922 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436264 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glgqx\" (UniqueName: \"kubernetes.io/projected/e0fb34a1-8ac1-464e-964c-c497603ff11f-kube-api-access-glgqx\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436280 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436292 4922 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436302 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436315 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436326 4922 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436337 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fb34a1-8ac1-464e-964c-c497603ff11f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436347 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fb34a1-8ac1-464e-964c-c497603ff11f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436358 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: I0126 14:29:00.436369 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bnqc\" (UniqueName: \"kubernetes.io/projected/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d-kube-api-access-8bnqc\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.946038 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.946122 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 26 
14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.946284 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.230:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzxhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-nzdsh_openstack(64dc8567-a56e-4cf4-8155-5b06c405f7ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:29:00 crc kubenswrapper[4922]: E0126 14:29:00.948212 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-nzdsh" podUID="64dc8567-a56e-4cf4-8155-5b06c405f7ba" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.162620 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dg49w" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.162864 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: E0126 14:29:01.165763 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-nzdsh" podUID="64dc8567-a56e-4cf4-8155-5b06c405f7ba" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.224416 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.233492 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.247690 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:01 crc kubenswrapper[4922]: E0126 14:29:01.248151 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" containerName="keystone-bootstrap" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.248167 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" containerName="keystone-bootstrap" Jan 26 14:29:01 crc kubenswrapper[4922]: E0126 14:29:01.248182 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api-log" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.248190 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api-log" Jan 26 14:29:01 crc kubenswrapper[4922]: E0126 14:29:01.248222 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.248231 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.248475 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" containerName="keystone-bootstrap" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.248500 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.248514 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api-log" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.249689 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.261554 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.270477 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.365270 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b57a3e81-4196-42d2-ac31-1b772322f93d-logs\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.365370 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjzn2\" (UniqueName: \"kubernetes.io/projected/b57a3e81-4196-42d2-ac31-1b772322f93d-kube-api-access-mjzn2\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.365415 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.365471 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-config-data\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.365515 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.374580 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dg49w"] Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.384163 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dg49w"] Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.472113 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b57a3e81-4196-42d2-ac31-1b772322f93d-logs\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.472234 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjzn2\" (UniqueName: \"kubernetes.io/projected/b57a3e81-4196-42d2-ac31-1b772322f93d-kube-api-access-mjzn2\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.472285 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.472342 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-config-data\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.472393 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.472547 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b57a3e81-4196-42d2-ac31-1b772322f93d-logs\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.476025 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.476156 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.483118 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-config-data\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.487550 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjzn2\" (UniqueName: \"kubernetes.io/projected/b57a3e81-4196-42d2-ac31-1b772322f93d-kube-api-access-mjzn2\") pod \"watcher-api-0\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.494149 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kdsxs"] Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.495533 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.498483 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.498889 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j5sp8" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.499086 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.499286 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.499453 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.505817 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kdsxs"] Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.574431 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-credential-keys\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.574747 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxncd\" (UniqueName: \"kubernetes.io/projected/079ac494-9665-4c61-9ec5-47628d00d8bc-kube-api-access-lxncd\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.575027 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-config-data\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.575105 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-combined-ca-bundle\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.575159 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-fernet-keys\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.575230 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-scripts\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.595103 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.677247 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-config-data\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.677337 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-combined-ca-bundle\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.677389 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-fernet-keys\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.677425 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-scripts\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.677457 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-credential-keys\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.677476 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxncd\" (UniqueName: \"kubernetes.io/projected/079ac494-9665-4c61-9ec5-47628d00d8bc-kube-api-access-lxncd\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.681216 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-fernet-keys\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.681309 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-scripts\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.681025 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-config-data\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.682268 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-credential-keys\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.688342 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-combined-ca-bundle\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.695875 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxncd\" (UniqueName: \"kubernetes.io/projected/079ac494-9665-4c61-9ec5-47628d00d8bc-kube-api-access-lxncd\") pod \"keystone-bootstrap-kdsxs\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:01 crc kubenswrapper[4922]: I0126 14:29:01.885773 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:02 crc kubenswrapper[4922]: I0126 14:29:02.768099 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:29:03 crc kubenswrapper[4922]: I0126 14:29:03.109831 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2045ac4f-f8dc-4b86-8f2a-a1f770feac2d" path="/var/lib/kubelet/pods/2045ac4f-f8dc-4b86-8f2a-a1f770feac2d/volumes" Jan 26 14:29:03 crc kubenswrapper[4922]: I0126 14:29:03.111243 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fb34a1-8ac1-464e-964c-c497603ff11f" path="/var/lib/kubelet/pods/e0fb34a1-8ac1-464e-964c-c497603ff11f/volumes" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.307196 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.307820 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.307867 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.308741 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"579737a5aa8bb32a4f554c6e647711e28b3e50a7ec3de0bd2d82dee5d94940f2"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.308804 4922 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://579737a5aa8bb32a4f554c6e647711e28b3e50a7ec3de0bd2d82dee5d94940f2" gracePeriod=600 Jan 26 14:29:11 crc kubenswrapper[4922]: E0126 14:29:11.525245 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 14:29:11 crc kubenswrapper[4922]: E0126 14:29:11.525316 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 14:29:11 crc kubenswrapper[4922]: E0126 14:29:11.525477 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.230:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64mnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-lt6mt_openstack(99c8b640-ac97-4a3e-8e4c-1781bd756396): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:29:11 crc kubenswrapper[4922]: E0126 14:29:11.526566 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-lt6mt" podUID="99c8b640-ac97-4a3e-8e4c-1781bd756396" Jan 26 14:29:11 crc kubenswrapper[4922]: 
I0126 14:29:11.668472 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.684967 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.695643 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.773730 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df15689-477e-4c5b-a42f-f43a103f7a2e-logs\") pod \"5df15689-477e-4c5b-a42f-f43a103f7a2e\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.773798 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9af68ec3-0690-4a73-9f25-3a50652fbe34-horizon-secret-key\") pod \"9af68ec3-0690-4a73-9f25-3a50652fbe34\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.773828 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af68ec3-0690-4a73-9f25-3a50652fbe34-logs\") pod \"9af68ec3-0690-4a73-9f25-3a50652fbe34\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.773876 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-scripts\") pod \"9af68ec3-0690-4a73-9f25-3a50652fbe34\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.773929 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-scripts\") pod \"b639ab62-28b8-4871-84d0-f28774600973\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.773962 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-config-data\") pod \"5df15689-477e-4c5b-a42f-f43a103f7a2e\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.773985 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-scripts\") pod \"5df15689-477e-4c5b-a42f-f43a103f7a2e\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774015 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b639ab62-28b8-4871-84d0-f28774600973-horizon-secret-key\") pod \"b639ab62-28b8-4871-84d0-f28774600973\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774039 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-config-data\") pod 
\"b639ab62-28b8-4871-84d0-f28774600973\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774095 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgfdw\" (UniqueName: \"kubernetes.io/projected/5df15689-477e-4c5b-a42f-f43a103f7a2e-kube-api-access-lgfdw\") pod \"5df15689-477e-4c5b-a42f-f43a103f7a2e\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774129 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b639ab62-28b8-4871-84d0-f28774600973-logs\") pod \"b639ab62-28b8-4871-84d0-f28774600973\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774157 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5df15689-477e-4c5b-a42f-f43a103f7a2e-horizon-secret-key\") pod \"5df15689-477e-4c5b-a42f-f43a103f7a2e\" (UID: \"5df15689-477e-4c5b-a42f-f43a103f7a2e\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774180 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxrp4\" (UniqueName: \"kubernetes.io/projected/b639ab62-28b8-4871-84d0-f28774600973-kube-api-access-kxrp4\") pod \"b639ab62-28b8-4871-84d0-f28774600973\" (UID: \"b639ab62-28b8-4871-84d0-f28774600973\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774274 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49bt8\" (UniqueName: \"kubernetes.io/projected/9af68ec3-0690-4a73-9f25-3a50652fbe34-kube-api-access-49bt8\") pod \"9af68ec3-0690-4a73-9f25-3a50652fbe34\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774247 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af68ec3-0690-4a73-9f25-3a50652fbe34-logs" (OuterVolumeSpecName: "logs") pod "9af68ec3-0690-4a73-9f25-3a50652fbe34" (UID: "9af68ec3-0690-4a73-9f25-3a50652fbe34"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774305 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df15689-477e-4c5b-a42f-f43a103f7a2e-logs" (OuterVolumeSpecName: "logs") pod "5df15689-477e-4c5b-a42f-f43a103f7a2e" (UID: "5df15689-477e-4c5b-a42f-f43a103f7a2e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774336 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-config-data\") pod \"9af68ec3-0690-4a73-9f25-3a50652fbe34\" (UID: \"9af68ec3-0690-4a73-9f25-3a50652fbe34\") " Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774724 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df15689-477e-4c5b-a42f-f43a103f7a2e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774738 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af68ec3-0690-4a73-9f25-3a50652fbe34-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.774789 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-scripts" (OuterVolumeSpecName: "scripts") pod "b639ab62-28b8-4871-84d0-f28774600973" (UID: "b639ab62-28b8-4871-84d0-f28774600973"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.775308 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-scripts" (OuterVolumeSpecName: "scripts") pod "9af68ec3-0690-4a73-9f25-3a50652fbe34" (UID: "9af68ec3-0690-4a73-9f25-3a50652fbe34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.775398 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-config-data" (OuterVolumeSpecName: "config-data") pod "9af68ec3-0690-4a73-9f25-3a50652fbe34" (UID: "9af68ec3-0690-4a73-9f25-3a50652fbe34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.775642 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b639ab62-28b8-4871-84d0-f28774600973-logs" (OuterVolumeSpecName: "logs") pod "b639ab62-28b8-4871-84d0-f28774600973" (UID: "b639ab62-28b8-4871-84d0-f28774600973"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.776105 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-config-data" (OuterVolumeSpecName: "config-data") pod "b639ab62-28b8-4871-84d0-f28774600973" (UID: "b639ab62-28b8-4871-84d0-f28774600973"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.778404 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af68ec3-0690-4a73-9f25-3a50652fbe34-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9af68ec3-0690-4a73-9f25-3a50652fbe34" (UID: "9af68ec3-0690-4a73-9f25-3a50652fbe34"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.779185 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-config-data" (OuterVolumeSpecName: "config-data") pod "5df15689-477e-4c5b-a42f-f43a103f7a2e" (UID: "5df15689-477e-4c5b-a42f-f43a103f7a2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.779206 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df15689-477e-4c5b-a42f-f43a103f7a2e-kube-api-access-lgfdw" (OuterVolumeSpecName: "kube-api-access-lgfdw") pod "5df15689-477e-4c5b-a42f-f43a103f7a2e" (UID: "5df15689-477e-4c5b-a42f-f43a103f7a2e"). InnerVolumeSpecName "kube-api-access-lgfdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.779587 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df15689-477e-4c5b-a42f-f43a103f7a2e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5df15689-477e-4c5b-a42f-f43a103f7a2e" (UID: "5df15689-477e-4c5b-a42f-f43a103f7a2e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.780024 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-scripts" (OuterVolumeSpecName: "scripts") pod "5df15689-477e-4c5b-a42f-f43a103f7a2e" (UID: "5df15689-477e-4c5b-a42f-f43a103f7a2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.781181 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b639ab62-28b8-4871-84d0-f28774600973-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b639ab62-28b8-4871-84d0-f28774600973" (UID: "b639ab62-28b8-4871-84d0-f28774600973"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.784416 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af68ec3-0690-4a73-9f25-3a50652fbe34-kube-api-access-49bt8" (OuterVolumeSpecName: "kube-api-access-49bt8") pod "9af68ec3-0690-4a73-9f25-3a50652fbe34" (UID: "9af68ec3-0690-4a73-9f25-3a50652fbe34"). InnerVolumeSpecName "kube-api-access-49bt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.785428 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b639ab62-28b8-4871-84d0-f28774600973-kube-api-access-kxrp4" (OuterVolumeSpecName: "kube-api-access-kxrp4") pod "b639ab62-28b8-4871-84d0-f28774600973" (UID: "b639ab62-28b8-4871-84d0-f28774600973"). InnerVolumeSpecName "kube-api-access-kxrp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.876966 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877010 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877022 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5df15689-477e-4c5b-a42f-f43a103f7a2e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877031 4922 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b639ab62-28b8-4871-84d0-f28774600973-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877044 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b639ab62-28b8-4871-84d0-f28774600973-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877053 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgfdw\" (UniqueName: \"kubernetes.io/projected/5df15689-477e-4c5b-a42f-f43a103f7a2e-kube-api-access-lgfdw\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877085 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b639ab62-28b8-4871-84d0-f28774600973-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877094 4922 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5df15689-477e-4c5b-a42f-f43a103f7a2e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877102 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxrp4\" (UniqueName: \"kubernetes.io/projected/b639ab62-28b8-4871-84d0-f28774600973-kube-api-access-kxrp4\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877111 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49bt8\" (UniqueName: \"kubernetes.io/projected/9af68ec3-0690-4a73-9f25-3a50652fbe34-kube-api-access-49bt8\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877120 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877128 4922 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9af68ec3-0690-4a73-9f25-3a50652fbe34-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:11 crc kubenswrapper[4922]: I0126 14:29:11.877135 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9af68ec3-0690-4a73-9f25-3a50652fbe34-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:12 crc 
kubenswrapper[4922]: I0126 14:29:12.299882 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb94c898f-56t8m" event={"ID":"9af68ec3-0690-4a73-9f25-3a50652fbe34","Type":"ContainerDied","Data":"85be9bd67def3ac03889653fcf297f6f25a3513e8be9656c994092e1afc8d6d7"} Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.300235 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fb94c898f-56t8m" Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.323262 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-865758bb69-6rftr" event={"ID":"b639ab62-28b8-4871-84d0-f28774600973","Type":"ContainerDied","Data":"cb7de69edfac8494f18c0ef1ff46ba96990e171ae9ff667bacc034e40746bbfd"} Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.323359 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-865758bb69-6rftr" Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.363560 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="579737a5aa8bb32a4f554c6e647711e28b3e50a7ec3de0bd2d82dee5d94940f2" exitCode=0 Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.363633 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"579737a5aa8bb32a4f554c6e647711e28b3e50a7ec3de0bd2d82dee5d94940f2"} Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.363668 4922 scope.go:117] "RemoveContainer" containerID="38f65164faf1c2f39140b3ebf8dc530554515c361f23b474730bf8efbdde8f32" Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.369613 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c565fcfc7-gdkrq" event={"ID":"5df15689-477e-4c5b-a42f-f43a103f7a2e","Type":"ContainerDied","Data":"07531e6f72e5610c361282259b9ddc888371c9a3e3536c8fbe71afdc690b0374"} Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.369684 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c565fcfc7-gdkrq" Jan 26 14:29:12 crc kubenswrapper[4922]: E0126 14:29:12.378368 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af68ec3_0690_4a73_9f25_3a50652fbe34.slice\": RecentStats: unable to find data in memory cache]" Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.434158 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fb94c898f-56t8m"] Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.457850 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fb94c898f-56t8m"] Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.482964 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-865758bb69-6rftr"] Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.494480 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-865758bb69-6rftr"] Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.508149 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c565fcfc7-gdkrq"] Jan 26 14:29:12 crc kubenswrapper[4922]: I0126 14:29:12.516283 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c565fcfc7-gdkrq"] Jan 26 14:29:13 crc kubenswrapper[4922]: I0126 14:29:13.107023 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df15689-477e-4c5b-a42f-f43a103f7a2e" path="/var/lib/kubelet/pods/5df15689-477e-4c5b-a42f-f43a103f7a2e/volumes" Jan 26 14:29:13 crc kubenswrapper[4922]: I0126 14:29:13.108005 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af68ec3-0690-4a73-9f25-3a50652fbe34" path="/var/lib/kubelet/pods/9af68ec3-0690-4a73-9f25-3a50652fbe34/volumes" Jan 26 14:29:13 crc kubenswrapper[4922]: I0126 14:29:13.108757 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b639ab62-28b8-4871-84d0-f28774600973" path="/var/lib/kubelet/pods/b639ab62-28b8-4871-84d0-f28774600973/volumes" Jan 26 14:29:14 crc kubenswrapper[4922]: E0126 14:29:14.319197 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 26 14:29:14 crc kubenswrapper[4922]: E0126 14:29:14.319617 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.230:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 26 14:29:14 crc kubenswrapper[4922]: E0126 14:29:14.319818 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.230:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x68hz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rvk6w_openstack(91754680-73d8-4c72-a7bd-834959e192a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:29:14 crc kubenswrapper[4922]: E0126 14:29:14.322018 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rvk6w" podUID="91754680-73d8-4c72-a7bd-834959e192a1" Jan 26 14:29:14 crc kubenswrapper[4922]: E0126 14:29:14.388542 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-rvk6w" podUID="91754680-73d8-4c72-a7bd-834959e192a1" Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.334777 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c779658fd-pldff"] Jan 26 14:29:15 crc kubenswrapper[4922]: W0126 14:29:15.347825 4922 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c995c1b_6b75_4638_a5d7_1df1539dcaeb.slice/crio-cbed027343d4d3cd4ea801ab148d2ad025a62a869f18682caf0fc2b6439baeb6 WatchSource:0}: Error finding container cbed027343d4d3cd4ea801ab148d2ad025a62a869f18682caf0fc2b6439baeb6: Status 404 returned error can't find the container with id cbed027343d4d3cd4ea801ab148d2ad025a62a869f18682caf0fc2b6439baeb6 Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.362902 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b4f749b44-2qdw7"] Jan 26 14:29:15 crc kubenswrapper[4922]: W0126 14:29:15.370312 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bccd630_51ec_481b_97c6_1f2757dfc685.slice/crio-82f9ca8c7e66e8fa845c0ad5b02a0e7614a68d93238f9e95d8efcaae5b2ca71e WatchSource:0}: Error finding container 82f9ca8c7e66e8fa845c0ad5b02a0e7614a68d93238f9e95d8efcaae5b2ca71e: Status 404 returned error can't find the container with id 82f9ca8c7e66e8fa845c0ad5b02a0e7614a68d93238f9e95d8efcaae5b2ca71e Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.401370 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"eed296e68f83239b3275538da1ff7df64bfd086ff1a6954732abb43e082dbdef"} Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.403766 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" event={"ID":"9d9d5644-40b7-49a4-9d01-6158dcb79c3d","Type":"ContainerStarted","Data":"571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057"} Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.403955 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.406572 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4483d7ac-397e-4220-82f3-c6832fe69c2e","Type":"ContainerStarted","Data":"ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345"} Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.409539 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"3b7784db-1198-4bd4-bed0-da049559613b","Type":"ContainerStarted","Data":"e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3"} Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.416479 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"0fe8483d01fe17dae14bd575d394e895ec02b281bd5bc48e80a4af9b52b57371"} Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.427746 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" podStartSLOduration=38.427723063 podStartE2EDuration="38.427723063s" podCreationTimestamp="2026-01-26 14:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:15.42051686 +0000 UTC m=+1172.622779632" watchObservedRunningTime="2026-01-26 14:29:15.427723063 +0000 UTC m=+1172.629985835" Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.432495 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-7b4f749b44-2qdw7" event={"ID":"9bccd630-51ec-481b-97c6-1f2757dfc685","Type":"ContainerStarted","Data":"82f9ca8c7e66e8fa845c0ad5b02a0e7614a68d93238f9e95d8efcaae5b2ca71e"} Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.434505 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c779658fd-pldff" event={"ID":"0c995c1b-6b75-4638-a5d7-1df1539dcaeb","Type":"ContainerStarted","Data":"cbed027343d4d3cd4ea801ab148d2ad025a62a869f18682caf0fc2b6439baeb6"} Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.436028 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xdqj7" event={"ID":"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9","Type":"ContainerStarted","Data":"61ac71e3b146021f3bb778b4b59391f795af61f8ae1751d232d0bac87fb85fdd"} Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.439793 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"41ec42e5-bf03-41f3-93cf-e18347511ed0","Type":"ContainerStarted","Data":"815bf6b074d13cc05b5cba0cf155b88738b12800c13b8dc53fdf0c2d875bc578"} Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.448307 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kdsxs"] Jan 26 14:29:15 crc kubenswrapper[4922]: W0126 14:29:15.465247 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb57a3e81_4196_42d2_ac31_1b772322f93d.slice/crio-6870e546f925284f6b27a837141378136d226f1c81ce628044bf846069606b06 WatchSource:0}: Error finding container 6870e546f925284f6b27a837141378136d226f1c81ce628044bf846069606b06: Status 404 returned error can't find the container with id 6870e546f925284f6b27a837141378136d226f1c81ce628044bf846069606b06 Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.470316 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.477633 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.113586127 podStartE2EDuration="38.477613938s" podCreationTimestamp="2026-01-26 14:28:37 +0000 UTC" firstStartedPulling="2026-01-26 14:28:39.190221713 +0000 UTC m=+1136.392484485" lastFinishedPulling="2026-01-26 14:29:11.554249524 +0000 UTC m=+1168.756512296" observedRunningTime="2026-01-26 14:29:15.4659803 +0000 UTC m=+1172.668243072" watchObservedRunningTime="2026-01-26 14:29:15.477613938 +0000 UTC m=+1172.679876710" Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.497095 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xdqj7" podStartSLOduration=3.236822025 podStartE2EDuration="38.497079076s" podCreationTimestamp="2026-01-26 14:28:37 +0000 UTC" firstStartedPulling="2026-01-26 14:28:39.68480034 +0000 UTC m=+1136.887063112" lastFinishedPulling="2026-01-26 14:29:14.945057391 +0000 UTC m=+1172.147320163" observedRunningTime="2026-01-26 14:29:15.490917853 +0000 UTC m=+1172.693180635" watchObservedRunningTime="2026-01-26 14:29:15.497079076 +0000 UTC m=+1172.699341848" Jan 26 14:29:15 crc kubenswrapper[4922]: I0126 14:29:15.522570 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=16.35139699 podStartE2EDuration="38.522556293s" podCreationTimestamp="2026-01-26 14:28:37 +0000 UTC" firstStartedPulling="2026-01-26 
14:28:38.761023657 +0000 UTC m=+1135.963286419" lastFinishedPulling="2026-01-26 14:29:00.93218295 +0000 UTC m=+1158.134445722" observedRunningTime="2026-01-26 14:29:15.520757893 +0000 UTC m=+1172.723020685" watchObservedRunningTime="2026-01-26 14:29:15.522556293 +0000 UTC m=+1172.724819065" Jan 26 14:29:16 crc kubenswrapper[4922]: I0126 14:29:16.450155 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b57a3e81-4196-42d2-ac31-1b772322f93d","Type":"ContainerStarted","Data":"1315765da1810dedcf08ad9e84bf6e5a3b8d7b0f3576e93aaf583e8d1063f23c"} Jan 26 14:29:16 crc kubenswrapper[4922]: I0126 14:29:16.450984 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b57a3e81-4196-42d2-ac31-1b772322f93d","Type":"ContainerStarted","Data":"6870e546f925284f6b27a837141378136d226f1c81ce628044bf846069606b06"} Jan 26 14:29:16 crc kubenswrapper[4922]: I0126 14:29:16.454116 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdsxs" event={"ID":"079ac494-9665-4c61-9ec5-47628d00d8bc","Type":"ContainerStarted","Data":"16b745693bcab6d0f9232d87e8bb69872326118be3758e747c896eccdb6e0620"} Jan 26 14:29:16 crc kubenswrapper[4922]: I0126 14:29:16.454210 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdsxs" event={"ID":"079ac494-9665-4c61-9ec5-47628d00d8bc","Type":"ContainerStarted","Data":"2b552dc47847c5f56f00613bbc4f3f776472ec7fb05be2995934b5c6eed8e5f5"} Jan 26 14:29:16 crc kubenswrapper[4922]: I0126 14:29:16.461889 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b4f749b44-2qdw7" event={"ID":"9bccd630-51ec-481b-97c6-1f2757dfc685","Type":"ContainerStarted","Data":"c2474915dcbbbd3a768fade598d9d0b2e8243b6212b803776364f82a9316297c"} Jan 26 14:29:16 crc kubenswrapper[4922]: I0126 14:29:16.465281 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c779658fd-pldff" event={"ID":"0c995c1b-6b75-4638-a5d7-1df1539dcaeb","Type":"ContainerStarted","Data":"259a658877712afe5ab88bb76b1b5d04301489bca18ad017fca19c302f92da76"} Jan 26 14:29:16 crc kubenswrapper[4922]: I0126 14:29:16.519415 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"2c6923c86b044682b604b73ef6fcfb6553f424a467b8e5027eb7f5915e6e43c7"} Jan 26 14:29:17 crc kubenswrapper[4922]: I0126 14:29:17.537641 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b57a3e81-4196-42d2-ac31-1b772322f93d","Type":"ContainerStarted","Data":"10bf5967a4ae2e989da515f21a43b102dcabe83ab29c09cc7577df2eb47f7643"} Jan 26 14:29:17 crc kubenswrapper[4922]: I0126 14:29:17.542314 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b4f749b44-2qdw7" event={"ID":"9bccd630-51ec-481b-97c6-1f2757dfc685","Type":"ContainerStarted","Data":"0142e706f07caaac11da2ed37cb72515ef4a009fdcfce539c86464a124266490"} Jan 26 14:29:17 crc kubenswrapper[4922]: I0126 14:29:17.545232 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c779658fd-pldff" event={"ID":"0c995c1b-6b75-4638-a5d7-1df1539dcaeb","Type":"ContainerStarted","Data":"8e7e766317cbcdcf373e2f7c34956351e452d3264203d3754e5a84c2ea7a9dab"} Jan 26 14:29:17 crc kubenswrapper[4922]: I0126 14:29:17.551044 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"c03c7bb1948a3f7424158c9ed9ec730754fadb4c346863bfa58d049d65a71c51"} Jan 26 14:29:17 crc kubenswrapper[4922]: I0126 14:29:17.578685 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kdsxs" podStartSLOduration=16.578647174 podStartE2EDuration="16.578647174s" podCreationTimestamp="2026-01-26 14:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:17.567631364 +0000 UTC m=+1174.769894146" watchObservedRunningTime="2026-01-26 14:29:17.578647174 +0000 UTC m=+1174.780909946" Jan 26 14:29:17 crc kubenswrapper[4922]: I0126 14:29:17.733928 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:17 crc kubenswrapper[4922]: I0126 14:29:17.763892 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.194157 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.194494 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.226545 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.562177 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nzdsh" event={"ID":"64dc8567-a56e-4cf4-8155-5b06c405f7ba","Type":"ContainerStarted","Data":"12325cb746235a5035dcbb0b5e62405626e7dd4ecbfbf27aa59d5e9353bf71a8"} Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.567503 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"4e64f52afca03e81359cbf037e083c42c10cb4ed80e2456d34be00cbfeafe500"} Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.568685 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.569337 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.582591 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-nzdsh" podStartSLOduration=2.742576607 podStartE2EDuration="41.582576776s" podCreationTimestamp="2026-01-26 14:28:37 +0000 UTC" firstStartedPulling="2026-01-26 14:28:39.274385873 +0000 UTC m=+1136.476648645" lastFinishedPulling="2026-01-26 14:29:18.114386052 +0000 UTC m=+1175.316648814" observedRunningTime="2026-01-26 14:29:18.577116992 +0000 UTC m=+1175.779379764" watchObservedRunningTime="2026-01-26 14:29:18.582576776 +0000 UTC m=+1175.784839548" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.628864 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.632036 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 
14:29:18.642045 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=17.64201766 podStartE2EDuration="17.64201766s" podCreationTimestamp="2026-01-26 14:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:18.631447492 +0000 UTC m=+1175.833710284" watchObservedRunningTime="2026-01-26 14:29:18.64201766 +0000 UTC m=+1175.844280442" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.644585 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b4f749b44-2qdw7" podStartSLOduration=32.544742431 podStartE2EDuration="32.644575222s" podCreationTimestamp="2026-01-26 14:28:46 +0000 UTC" firstStartedPulling="2026-01-26 14:29:15.375520333 +0000 UTC m=+1172.577783105" lastFinishedPulling="2026-01-26 14:29:15.475353114 +0000 UTC m=+1172.677615896" observedRunningTime="2026-01-26 14:29:18.607200339 +0000 UTC m=+1175.809463121" watchObservedRunningTime="2026-01-26 14:29:18.644575222 +0000 UTC m=+1175.846837994" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.666905 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c779658fd-pldff" podStartSLOduration=32.556595035 podStartE2EDuration="32.66687833s" podCreationTimestamp="2026-01-26 14:28:46 +0000 UTC" firstStartedPulling="2026-01-26 14:29:15.355112769 +0000 UTC m=+1172.557375541" lastFinishedPulling="2026-01-26 14:29:15.465396064 +0000 UTC m=+1172.667658836" observedRunningTime="2026-01-26 14:29:18.655966123 +0000 UTC m=+1175.858228915" watchObservedRunningTime="2026-01-26 14:29:18.66687833 +0000 UTC m=+1175.869141102" Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.752418 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 14:29:18 crc kubenswrapper[4922]: I0126 14:29:18.775112 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:29:19 crc kubenswrapper[4922]: I0126 14:29:19.594532 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"d280b2214e19b5f24a600c5e084683faf0f6e199dce9f843e171be99aa48aed4"} Jan 26 14:29:19 crc kubenswrapper[4922]: I0126 14:29:19.595149 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"eb363fe5168200c3a597ecae0e79cda5faa86a2874e7cf057655abf5fffff25b"} Jan 26 14:29:20 crc kubenswrapper[4922]: I0126 14:29:20.630776 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"158773f44795c8787f5e71544cb6e10fa313f0e91b231b72052511ff91540bfd"} Jan 26 14:29:20 crc kubenswrapper[4922]: I0126 14:29:20.631050 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"ffa3cacb85f8e7dfeedd18a3dd560e6aef8438b48e00a6a5686aab8069a4c988"} Jan 26 14:29:20 crc kubenswrapper[4922]: I0126 14:29:20.631016 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="41ec42e5-bf03-41f3-93cf-e18347511ed0" containerName="watcher-decision-engine" 
containerID="cri-o://815bf6b074d13cc05b5cba0cf155b88738b12800c13b8dc53fdf0c2d875bc578" gracePeriod=30 Jan 26 14:29:20 crc kubenswrapper[4922]: I0126 14:29:20.630912 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="3b7784db-1198-4bd4-bed0-da049559613b" containerName="watcher-applier" containerID="cri-o://e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3" gracePeriod=30 Jan 26 14:29:21 crc kubenswrapper[4922]: I0126 14:29:21.490667 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 14:29:21 crc kubenswrapper[4922]: I0126 14:29:21.596316 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 14:29:21 crc kubenswrapper[4922]: I0126 14:29:21.596363 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 26 14:29:21 crc kubenswrapper[4922]: I0126 14:29:21.615666 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 26 14:29:21 crc kubenswrapper[4922]: I0126 14:29:21.645859 4922 generic.go:334] "Generic (PLEG): container finished" podID="41ec42e5-bf03-41f3-93cf-e18347511ed0" containerID="815bf6b074d13cc05b5cba0cf155b88738b12800c13b8dc53fdf0c2d875bc578" exitCode=1 Jan 26 14:29:21 crc kubenswrapper[4922]: I0126 14:29:21.645933 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"41ec42e5-bf03-41f3-93cf-e18347511ed0","Type":"ContainerDied","Data":"815bf6b074d13cc05b5cba0cf155b88738b12800c13b8dc53fdf0c2d875bc578"} Jan 26 14:29:21 crc kubenswrapper[4922]: I0126 14:29:21.650784 4922 generic.go:334] "Generic (PLEG): container finished" podID="55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" containerID="61ac71e3b146021f3bb778b4b59391f795af61f8ae1751d232d0bac87fb85fdd" exitCode=0 Jan 26 14:29:21 crc kubenswrapper[4922]: I0126 14:29:21.651215 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xdqj7" event={"ID":"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9","Type":"ContainerDied","Data":"61ac71e3b146021f3bb778b4b59391f795af61f8ae1751d232d0bac87fb85fdd"} Jan 26 14:29:21 crc kubenswrapper[4922]: I0126 14:29:21.675411 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 14:29:22 crc kubenswrapper[4922]: E0126 14:29:22.095006 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.230:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-lt6mt" podUID="99c8b640-ac97-4a3e-8e4c-1781bd756396" Jan 26 14:29:22 crc kubenswrapper[4922]: I0126 14:29:22.665242 4922 generic.go:334] "Generic (PLEG): container finished" podID="3b7784db-1198-4bd4-bed0-da049559613b" containerID="e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3" exitCode=0 Jan 26 14:29:22 crc kubenswrapper[4922]: I0126 14:29:22.665369 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"3b7784db-1198-4bd4-bed0-da049559613b","Type":"ContainerDied","Data":"e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3"} Jan 26 14:29:23 crc kubenswrapper[4922]: E0126 14:29:23.195013 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created 
or running: checking if PID of e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3 is running failed: container process not found" containerID="e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 14:29:23 crc kubenswrapper[4922]: E0126 14:29:23.196250 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3 is running failed: container process not found" containerID="e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 14:29:23 crc kubenswrapper[4922]: E0126 14:29:23.196632 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3 is running failed: container process not found" containerID="e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 14:29:23 crc kubenswrapper[4922]: E0126 14:29:23.196717 4922 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="3b7784db-1198-4bd4-bed0-da049559613b" containerName="watcher-applier" Jan 26 14:29:23 crc kubenswrapper[4922]: I0126 14:29:23.414238 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:29:23 crc kubenswrapper[4922]: I0126 14:29:23.492659 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb8c8997-s9zsx"] Jan 26 14:29:23 crc kubenswrapper[4922]: I0126 14:29:23.492888 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" podUID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerName="dnsmasq-dns" containerID="cri-o://fcb4e7b3fe98f08e98fe4a082932aff9e57a0602f7a965cbde94b4a0fd1e802a" gracePeriod=10 Jan 26 14:29:23 crc kubenswrapper[4922]: I0126 14:29:23.674580 4922 generic.go:334] "Generic (PLEG): container finished" podID="079ac494-9665-4c61-9ec5-47628d00d8bc" containerID="16b745693bcab6d0f9232d87e8bb69872326118be3758e747c896eccdb6e0620" exitCode=0 Jan 26 14:29:23 crc kubenswrapper[4922]: I0126 14:29:23.674663 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdsxs" event={"ID":"079ac494-9665-4c61-9ec5-47628d00d8bc","Type":"ContainerDied","Data":"16b745693bcab6d0f9232d87e8bb69872326118be3758e747c896eccdb6e0620"} Jan 26 14:29:24 crc kubenswrapper[4922]: I0126 14:29:24.563313 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" podUID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Jan 26 14:29:25 crc kubenswrapper[4922]: I0126 14:29:25.060616 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:25 crc kubenswrapper[4922]: I0126 14:29:25.061209 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" 
podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api-log" containerID="cri-o://1315765da1810dedcf08ad9e84bf6e5a3b8d7b0f3576e93aaf583e8d1063f23c" gracePeriod=30 Jan 26 14:29:25 crc kubenswrapper[4922]: I0126 14:29:25.061256 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api" containerID="cri-o://10bf5967a4ae2e989da515f21a43b102dcabe83ab29c09cc7577df2eb47f7643" gracePeriod=30 Jan 26 14:29:25 crc kubenswrapper[4922]: I0126 14:29:25.692230 4922 generic.go:334] "Generic (PLEG): container finished" podID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerID="fcb4e7b3fe98f08e98fe4a082932aff9e57a0602f7a965cbde94b4a0fd1e802a" exitCode=0 Jan 26 14:29:25 crc kubenswrapper[4922]: I0126 14:29:25.692310 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" event={"ID":"e4d5060c-9d3e-4517-8910-0ef46172a190","Type":"ContainerDied","Data":"fcb4e7b3fe98f08e98fe4a082932aff9e57a0602f7a965cbde94b4a0fd1e802a"} Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:26.595975 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9322/\": dial tcp 10.217.0.165:9322: connect: connection refused" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:26.596548 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.165:9322/\": dial tcp 10.217.0.165:9322: connect: connection refused" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:26.671307 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:26.671374 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:26.783532 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:26.783729 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:27.712829 4922 generic.go:334] "Generic (PLEG): container finished" podID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerID="1315765da1810dedcf08ad9e84bf6e5a3b8d7b0f3576e93aaf583e8d1063f23c" exitCode=143 Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:27.713013 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b57a3e81-4196-42d2-ac31-1b772322f93d","Type":"ContainerDied","Data":"1315765da1810dedcf08ad9e84bf6e5a3b8d7b0f3576e93aaf583e8d1063f23c"} Jan 26 14:29:31 crc kubenswrapper[4922]: E0126 14:29:28.194148 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3 is running failed: container process not found" containerID="e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 14:29:31 crc kubenswrapper[4922]: 
E0126 14:29:28.194659 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3 is running failed: container process not found" containerID="e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 14:29:31 crc kubenswrapper[4922]: E0126 14:29:28.195003 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3 is running failed: container process not found" containerID="e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 14:29:31 crc kubenswrapper[4922]: E0126 14:29:28.195031 4922 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="3b7784db-1198-4bd4-bed0-da049559613b" containerName="watcher-applier" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:29.562629 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" podUID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Jan 26 14:29:31 crc kubenswrapper[4922]: E0126 14:29:31.269419 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Jan 26 14:29:31 crc kubenswrapper[4922]: E0126 14:29:31.269830 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntrs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4483d7ac-397e-4220-82f3-c6832fe69c2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.581530 4922 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xdqj7" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.585512 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.597918 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.165:9322/\": dial tcp 10.217.0.165:9322: connect: connection refused" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.598554 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9322/\": dial tcp 10.217.0.165:9322: connect: connection refused" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.605439 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.727856 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-combined-ca-bundle\") pod \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.727925 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxncd\" (UniqueName: \"kubernetes.io/projected/079ac494-9665-4c61-9ec5-47628d00d8bc-kube-api-access-lxncd\") pod \"079ac494-9665-4c61-9ec5-47628d00d8bc\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728236 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-dns-svc\") pod \"e4d5060c-9d3e-4517-8910-0ef46172a190\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728261 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-scripts\") pod \"079ac494-9665-4c61-9ec5-47628d00d8bc\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728278 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-combined-ca-bundle\") pod \"079ac494-9665-4c61-9ec5-47628d00d8bc\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728323 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-nb\") pod \"e4d5060c-9d3e-4517-8910-0ef46172a190\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728381 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-config-data\") pod 
\"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728416 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-sb\") pod \"e4d5060c-9d3e-4517-8910-0ef46172a190\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728445 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6g8d\" (UniqueName: \"kubernetes.io/projected/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-kube-api-access-l6g8d\") pod \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728460 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-scripts\") pod \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728488 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-credential-keys\") pod \"079ac494-9665-4c61-9ec5-47628d00d8bc\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728530 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvt26\" (UniqueName: \"kubernetes.io/projected/e4d5060c-9d3e-4517-8910-0ef46172a190-kube-api-access-gvt26\") pod \"e4d5060c-9d3e-4517-8910-0ef46172a190\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728549 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-config-data\") pod \"079ac494-9665-4c61-9ec5-47628d00d8bc\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728569 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-config\") pod \"e4d5060c-9d3e-4517-8910-0ef46172a190\" (UID: \"e4d5060c-9d3e-4517-8910-0ef46172a190\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728589 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-fernet-keys\") pod \"079ac494-9665-4c61-9ec5-47628d00d8bc\" (UID: \"079ac494-9665-4c61-9ec5-47628d00d8bc\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.728617 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-logs\") pod \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\" (UID: \"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.733524 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-logs" (OuterVolumeSpecName: "logs") pod "55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" (UID: 
"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.753912 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "079ac494-9665-4c61-9ec5-47628d00d8bc" (UID: "079ac494-9665-4c61-9ec5-47628d00d8bc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.754000 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079ac494-9665-4c61-9ec5-47628d00d8bc-kube-api-access-lxncd" (OuterVolumeSpecName: "kube-api-access-lxncd") pod "079ac494-9665-4c61-9ec5-47628d00d8bc" (UID: "079ac494-9665-4c61-9ec5-47628d00d8bc"). InnerVolumeSpecName "kube-api-access-lxncd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.770302 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-scripts" (OuterVolumeSpecName: "scripts") pod "55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" (UID: "55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.776523 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.778852 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d5060c-9d3e-4517-8910-0ef46172a190-kube-api-access-gvt26" (OuterVolumeSpecName: "kube-api-access-gvt26") pod "e4d5060c-9d3e-4517-8910-0ef46172a190" (UID: "e4d5060c-9d3e-4517-8910-0ef46172a190"). InnerVolumeSpecName "kube-api-access-gvt26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.779432 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-kube-api-access-l6g8d" (OuterVolumeSpecName: "kube-api-access-l6g8d") pod "55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" (UID: "55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9"). InnerVolumeSpecName "kube-api-access-l6g8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.790651 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"39aaa20cfc912fe2a76cc7fd2c3fd36d247b533ed6858de35949ffef11bc6d0f"} Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.792121 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xdqj7" event={"ID":"55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9","Type":"ContainerDied","Data":"247ce08d365649e22911febfe57c365bee476d703aed8113693a40675504a2ba"} Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.792152 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="247ce08d365649e22911febfe57c365bee476d703aed8113693a40675504a2ba" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.792231 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xdqj7" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.794674 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.794834 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" event={"ID":"e4d5060c-9d3e-4517-8910-0ef46172a190","Type":"ContainerDied","Data":"468ad82f871f465e7dcd83cf379c4648a61f9dabcb1b65e566e7301bbfd5cc7e"} Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.794866 4922 scope.go:117] "RemoveContainer" containerID="fcb4e7b3fe98f08e98fe4a082932aff9e57a0602f7a965cbde94b4a0fd1e802a" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.794975 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb8c8997-s9zsx" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.802188 4922 generic.go:334] "Generic (PLEG): container finished" podID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerID="10bf5967a4ae2e989da515f21a43b102dcabe83ab29c09cc7577df2eb47f7643" exitCode=0 Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.802259 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b57a3e81-4196-42d2-ac31-1b772322f93d","Type":"ContainerDied","Data":"10bf5967a4ae2e989da515f21a43b102dcabe83ab29c09cc7577df2eb47f7643"} Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.803457 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "079ac494-9665-4c61-9ec5-47628d00d8bc" (UID: "079ac494-9665-4c61-9ec5-47628d00d8bc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.803640 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-scripts" (OuterVolumeSpecName: "scripts") pod "079ac494-9665-4c61-9ec5-47628d00d8bc" (UID: "079ac494-9665-4c61-9ec5-47628d00d8bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.820825 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdsxs" event={"ID":"079ac494-9665-4c61-9ec5-47628d00d8bc","Type":"ContainerDied","Data":"2b552dc47847c5f56f00613bbc4f3f776472ec7fb05be2995934b5c6eed8e5f5"} Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.820860 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b552dc47847c5f56f00613bbc4f3f776472ec7fb05be2995934b5c6eed8e5f5" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.820926 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kdsxs" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.831716 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvt26\" (UniqueName: \"kubernetes.io/projected/e4d5060c-9d3e-4517-8910-0ef46172a190-kube-api-access-gvt26\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.831747 4922 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.831755 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.831764 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxncd\" (UniqueName: \"kubernetes.io/projected/079ac494-9665-4c61-9ec5-47628d00d8bc-kube-api-access-lxncd\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.831772 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.831780 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6g8d\" (UniqueName: \"kubernetes.io/projected/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-kube-api-access-l6g8d\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.831789 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.831796 4922 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.858926 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" (UID: "55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.876217 4922 scope.go:117] "RemoveContainer" containerID="b2f8c13e0cd0bc594f4ceb5499ebb25a917383f3af656ceb5c5959464317b2cc" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.900074 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "079ac494-9665-4c61-9ec5-47628d00d8bc" (UID: "079ac494-9665-4c61-9ec5-47628d00d8bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.926995 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-config-data" (OuterVolumeSpecName: "config-data") pod "079ac494-9665-4c61-9ec5-47628d00d8bc" (UID: "079ac494-9665-4c61-9ec5-47628d00d8bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.933820 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-combined-ca-bundle\") pod \"3b7784db-1198-4bd4-bed0-da049559613b\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.933865 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl5s4\" (UniqueName: \"kubernetes.io/projected/41ec42e5-bf03-41f3-93cf-e18347511ed0-kube-api-access-fl5s4\") pod \"41ec42e5-bf03-41f3-93cf-e18347511ed0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.933908 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-config-data\") pod \"3b7784db-1198-4bd4-bed0-da049559613b\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.933949 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7784db-1198-4bd4-bed0-da049559613b-logs\") pod \"3b7784db-1198-4bd4-bed0-da049559613b\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.933996 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-combined-ca-bundle\") pod \"41ec42e5-bf03-41f3-93cf-e18347511ed0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.934018 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fp2q\" (UniqueName: \"kubernetes.io/projected/3b7784db-1198-4bd4-bed0-da049559613b-kube-api-access-6fp2q\") pod \"3b7784db-1198-4bd4-bed0-da049559613b\" (UID: \"3b7784db-1198-4bd4-bed0-da049559613b\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.934041 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-custom-prometheus-ca\") pod \"41ec42e5-bf03-41f3-93cf-e18347511ed0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.934104 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-config-data\") pod \"41ec42e5-bf03-41f3-93cf-e18347511ed0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.934135 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/41ec42e5-bf03-41f3-93cf-e18347511ed0-logs\") pod \"41ec42e5-bf03-41f3-93cf-e18347511ed0\" (UID: \"41ec42e5-bf03-41f3-93cf-e18347511ed0\") " Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.934257 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-config-data" (OuterVolumeSpecName: "config-data") pod "55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" (UID: "55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.934490 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.934507 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.934517 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079ac494-9665-4c61-9ec5-47628d00d8bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.934526 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.935251 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e4d5060c-9d3e-4517-8910-0ef46172a190" (UID: "e4d5060c-9d3e-4517-8910-0ef46172a190"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.935317 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7784db-1198-4bd4-bed0-da049559613b-logs" (OuterVolumeSpecName: "logs") pod "3b7784db-1198-4bd4-bed0-da049559613b" (UID: "3b7784db-1198-4bd4-bed0-da049559613b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.935759 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.937872 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ec42e5-bf03-41f3-93cf-e18347511ed0-logs" (OuterVolumeSpecName: "logs") pod "41ec42e5-bf03-41f3-93cf-e18347511ed0" (UID: "41ec42e5-bf03-41f3-93cf-e18347511ed0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.944272 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4d5060c-9d3e-4517-8910-0ef46172a190" (UID: "e4d5060c-9d3e-4517-8910-0ef46172a190"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:31 crc kubenswrapper[4922]: I0126 14:29:31.966933 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e4d5060c-9d3e-4517-8910-0ef46172a190" (UID: "e4d5060c-9d3e-4517-8910-0ef46172a190"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.004940 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7784db-1198-4bd4-bed0-da049559613b-kube-api-access-6fp2q" (OuterVolumeSpecName: "kube-api-access-6fp2q") pod "3b7784db-1198-4bd4-bed0-da049559613b" (UID: "3b7784db-1198-4bd4-bed0-da049559613b"). InnerVolumeSpecName "kube-api-access-6fp2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.006210 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ec42e5-bf03-41f3-93cf-e18347511ed0-kube-api-access-fl5s4" (OuterVolumeSpecName: "kube-api-access-fl5s4") pod "41ec42e5-bf03-41f3-93cf-e18347511ed0" (UID: "41ec42e5-bf03-41f3-93cf-e18347511ed0"). InnerVolumeSpecName "kube-api-access-fl5s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.029262 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41ec42e5-bf03-41f3-93cf-e18347511ed0" (UID: "41ec42e5-bf03-41f3-93cf-e18347511ed0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036263 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-custom-prometheus-ca\") pod \"b57a3e81-4196-42d2-ac31-1b772322f93d\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036321 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-combined-ca-bundle\") pod \"b57a3e81-4196-42d2-ac31-1b772322f93d\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036428 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b57a3e81-4196-42d2-ac31-1b772322f93d-logs\") pod \"b57a3e81-4196-42d2-ac31-1b772322f93d\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036473 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjzn2\" (UniqueName: \"kubernetes.io/projected/b57a3e81-4196-42d2-ac31-1b772322f93d-kube-api-access-mjzn2\") pod \"b57a3e81-4196-42d2-ac31-1b772322f93d\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036488 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-config-data\") pod \"b57a3e81-4196-42d2-ac31-1b772322f93d\" (UID: \"b57a3e81-4196-42d2-ac31-1b772322f93d\") " Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036933 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036951 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl5s4\" (UniqueName: \"kubernetes.io/projected/41ec42e5-bf03-41f3-93cf-e18347511ed0-kube-api-access-fl5s4\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036961 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036971 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7784db-1198-4bd4-bed0-da049559613b-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036980 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036989 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fp2q\" (UniqueName: \"kubernetes.io/projected/3b7784db-1198-4bd4-bed0-da049559613b-kube-api-access-6fp2q\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.036999 4922 reconciler_common.go:293] "Volume detached 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41ec42e5-bf03-41f3-93cf-e18347511ed0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.037008 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.037968 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b57a3e81-4196-42d2-ac31-1b772322f93d-logs" (OuterVolumeSpecName: "logs") pod "b57a3e81-4196-42d2-ac31-1b772322f93d" (UID: "b57a3e81-4196-42d2-ac31-1b772322f93d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.045967 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57a3e81-4196-42d2-ac31-1b772322f93d-kube-api-access-mjzn2" (OuterVolumeSpecName: "kube-api-access-mjzn2") pod "b57a3e81-4196-42d2-ac31-1b772322f93d" (UID: "b57a3e81-4196-42d2-ac31-1b772322f93d"). InnerVolumeSpecName "kube-api-access-mjzn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.047234 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b7784db-1198-4bd4-bed0-da049559613b" (UID: "3b7784db-1198-4bd4-bed0-da049559613b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.056175 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "41ec42e5-bf03-41f3-93cf-e18347511ed0" (UID: "41ec42e5-bf03-41f3-93cf-e18347511ed0"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.059750 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-config" (OuterVolumeSpecName: "config") pod "e4d5060c-9d3e-4517-8910-0ef46172a190" (UID: "e4d5060c-9d3e-4517-8910-0ef46172a190"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.079407 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-config-data" (OuterVolumeSpecName: "config-data") pod "3b7784db-1198-4bd4-bed0-da049559613b" (UID: "3b7784db-1198-4bd4-bed0-da049559613b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.109633 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b57a3e81-4196-42d2-ac31-1b772322f93d" (UID: "b57a3e81-4196-42d2-ac31-1b772322f93d"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.117538 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-config-data" (OuterVolumeSpecName: "config-data") pod "b57a3e81-4196-42d2-ac31-1b772322f93d" (UID: "b57a3e81-4196-42d2-ac31-1b772322f93d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.118476 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b57a3e81-4196-42d2-ac31-1b772322f93d" (UID: "b57a3e81-4196-42d2-ac31-1b772322f93d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.140400 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d5060c-9d3e-4517-8910-0ef46172a190-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.140434 4922 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.140447 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.140456 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.140464 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7784db-1198-4bd4-bed0-da049559613b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.140474 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b57a3e81-4196-42d2-ac31-1b772322f93d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.140482 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjzn2\" (UniqueName: \"kubernetes.io/projected/b57a3e81-4196-42d2-ac31-1b772322f93d-kube-api-access-mjzn2\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.140491 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b57a3e81-4196-42d2-ac31-1b772322f93d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.140499 4922 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.145835 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb8c8997-s9zsx"] Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.152176 
4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-config-data" (OuterVolumeSpecName: "config-data") pod "41ec42e5-bf03-41f3-93cf-e18347511ed0" (UID: "41ec42e5-bf03-41f3-93cf-e18347511ed0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.153592 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb8c8997-s9zsx"] Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.244188 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ec42e5-bf03-41f3-93cf-e18347511ed0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.787801 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7cd8b7c676-rg4sd"] Jan 26 14:29:32 crc kubenswrapper[4922]: E0126 14:29:32.788497 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788515 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api" Jan 26 14:29:32 crc kubenswrapper[4922]: E0126 14:29:32.788530 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ec42e5-bf03-41f3-93cf-e18347511ed0" containerName="watcher-decision-engine" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788536 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ec42e5-bf03-41f3-93cf-e18347511ed0" containerName="watcher-decision-engine" Jan 26 14:29:32 crc kubenswrapper[4922]: E0126 14:29:32.788546 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerName="init" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788552 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerName="init" Jan 26 14:29:32 crc kubenswrapper[4922]: E0126 14:29:32.788569 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7784db-1198-4bd4-bed0-da049559613b" containerName="watcher-applier" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788574 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7784db-1198-4bd4-bed0-da049559613b" containerName="watcher-applier" Jan 26 14:29:32 crc kubenswrapper[4922]: E0126 14:29:32.788585 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079ac494-9665-4c61-9ec5-47628d00d8bc" containerName="keystone-bootstrap" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788592 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="079ac494-9665-4c61-9ec5-47628d00d8bc" containerName="keystone-bootstrap" Jan 26 14:29:32 crc kubenswrapper[4922]: E0126 14:29:32.788606 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" containerName="placement-db-sync" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788612 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" containerName="placement-db-sync" Jan 26 14:29:32 crc kubenswrapper[4922]: E0126 14:29:32.788623 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api-log" Jan 26 14:29:32 crc 
kubenswrapper[4922]: I0126 14:29:32.788628 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api-log" Jan 26 14:29:32 crc kubenswrapper[4922]: E0126 14:29:32.788637 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerName="dnsmasq-dns" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788642 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerName="dnsmasq-dns" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788797 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="079ac494-9665-4c61-9ec5-47628d00d8bc" containerName="keystone-bootstrap" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788817 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api-log" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788824 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7784db-1198-4bd4-bed0-da049559613b" containerName="watcher-applier" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788838 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d5060c-9d3e-4517-8910-0ef46172a190" containerName="dnsmasq-dns" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788850 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" containerName="watcher-api" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788864 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" containerName="placement-db-sync" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.788874 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ec42e5-bf03-41f3-93cf-e18347511ed0" containerName="watcher-decision-engine" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.789505 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.792213 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.792467 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.792640 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.792807 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j5sp8" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.792988 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.793147 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.816467 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cd8b7c676-rg4sd"] Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.825541 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c4549854d-f2kpv"] Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.827259 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.834249 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c4549854d-f2kpv"] Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.867268 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.867447 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.867545 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2wlz5" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.867664 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.867771 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.916940 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.917119 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"41ec42e5-bf03-41f3-93cf-e18347511ed0","Type":"ContainerDied","Data":"405bd8a5749fd02b1737ce22270c1ed426e4db152a57b2cd6caf0ef3c95ac7c6"} Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.918665 4922 scope.go:117] "RemoveContainer" containerID="815bf6b074d13cc05b5cba0cf155b88738b12800c13b8dc53fdf0c2d875bc578" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.920471 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b57a3e81-4196-42d2-ac31-1b772322f93d","Type":"ContainerDied","Data":"6870e546f925284f6b27a837141378136d226f1c81ce628044bf846069606b06"} Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.920537 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.929674 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"3b7784db-1198-4bd4-bed0-da049559613b","Type":"ContainerDied","Data":"77d287c8fd02e126d5db68498dccd68ac34698520dcb57d967466851ef6adc47"} Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.929757 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.942145 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"e7b46c45ca721f090c49be2707adc629f79213cd73e0e4f77ebca9a5f6ebc8e9"} Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.942183 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03d225b5-5466-45de-9417-54a11fa79429","Type":"ContainerStarted","Data":"48ad7ee4d68d3beb0dd18c3af01bbab354e789e4715abc6b84f2dfda3f389daf"} Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.943642 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvk6w" event={"ID":"91754680-73d8-4c72-a7bd-834959e192a1","Type":"ContainerStarted","Data":"fd62fad13f01dcbbf639bc9ff71f574f92522f976a084b2bb925a733ab522f31"} Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.954058 4922 scope.go:117] "RemoveContainer" containerID="10bf5967a4ae2e989da515f21a43b102dcabe83ab29c09cc7577df2eb47f7643" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.976690 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-fernet-keys\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.976740 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75199453-47fb-4d94-ae1d-908c20b64cfd-logs\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.977007 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-credential-keys\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.977115 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-public-tls-certs\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.977991 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-config-data\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978025 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-config-data\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978083 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-combined-ca-bundle\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978155 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsqh\" (UniqueName: \"kubernetes.io/projected/75199453-47fb-4d94-ae1d-908c20b64cfd-kube-api-access-zpsqh\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978203 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-internal-tls-certs\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978281 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzkq2\" (UniqueName: \"kubernetes.io/projected/aead8c46-9f8b-45dd-9561-b320c5c7bde4-kube-api-access-lzkq2\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978304 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-public-tls-certs\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978334 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-scripts\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978356 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-scripts\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978374 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-combined-ca-bundle\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.978410 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-internal-tls-certs\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:32 crc kubenswrapper[4922]: I0126 14:29:32.984936 4922 scope.go:117] "RemoveContainer" containerID="1315765da1810dedcf08ad9e84bf6e5a3b8d7b0f3576e93aaf583e8d1063f23c" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.006084 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=79.74093658 podStartE2EDuration="2m14.006050929s" podCreationTimestamp="2026-01-26 14:27:19 +0000 UTC" firstStartedPulling="2026-01-26 14:28:24.856486886 +0000 UTC m=+1122.058749658" lastFinishedPulling="2026-01-26 14:29:19.121601235 +0000 UTC m=+1176.323864007" observedRunningTime="2026-01-26 14:29:32.989574865 +0000 UTC m=+1190.191837657" watchObservedRunningTime="2026-01-26 14:29:33.006050929 +0000 UTC m=+1190.208313701" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.012166 4922 scope.go:117] "RemoveContainer" containerID="e043fb5a5fcc8439a4563889d3c52f42d8e87fce4e5235a0a1892ab2dfcc1da3" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.069935 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rvk6w" podStartSLOduration=3.626759957 podStartE2EDuration="56.069912197s" podCreationTimestamp="2026-01-26 14:28:37 +0000 UTC" firstStartedPulling="2026-01-26 14:28:38.951753568 +0000 UTC m=+1136.154016340" lastFinishedPulling="2026-01-26 14:29:31.394905808 +0000 UTC m=+1188.597168580" observedRunningTime="2026-01-26 14:29:33.02346571 +0000 UTC m=+1190.225728502" watchObservedRunningTime="2026-01-26 14:29:33.069912197 +0000 UTC m=+1190.272174969" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085144 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-internal-tls-certs\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 
14:29:33.085239 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzkq2\" (UniqueName: \"kubernetes.io/projected/aead8c46-9f8b-45dd-9561-b320c5c7bde4-kube-api-access-lzkq2\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085259 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-public-tls-certs\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085281 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-scripts\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085308 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-scripts\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085324 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-combined-ca-bundle\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085352 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-internal-tls-certs\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085381 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-fernet-keys\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085398 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75199453-47fb-4d94-ae1d-908c20b64cfd-logs\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085427 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-credential-keys\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085484 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-public-tls-certs\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085501 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-config-data\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085518 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-config-data\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085536 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-combined-ca-bundle\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.085567 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpsqh\" (UniqueName: \"kubernetes.io/projected/75199453-47fb-4d94-ae1d-908c20b64cfd-kube-api-access-zpsqh\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.094191 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.096152 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-internal-tls-certs\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.102687 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-internal-tls-certs\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.103216 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75199453-47fb-4d94-ae1d-908c20b64cfd-logs\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.139385 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-combined-ca-bundle\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.141193 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-scripts\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.143008 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-scripts\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.143715 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpsqh\" (UniqueName: \"kubernetes.io/projected/75199453-47fb-4d94-ae1d-908c20b64cfd-kube-api-access-zpsqh\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.143818 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-public-tls-certs\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.144578 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-credential-keys\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.144631 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-public-tls-certs\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.146731 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzkq2\" (UniqueName: \"kubernetes.io/projected/aead8c46-9f8b-45dd-9561-b320c5c7bde4-kube-api-access-lzkq2\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.148918 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-fernet-keys\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.153452 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-combined-ca-bundle\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.155318 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75199453-47fb-4d94-ae1d-908c20b64cfd-config-data\") pod \"placement-5c4549854d-f2kpv\" (UID: \"75199453-47fb-4d94-ae1d-908c20b64cfd\") " 
pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.155329 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aead8c46-9f8b-45dd-9561-b320c5c7bde4-config-data\") pod \"keystone-7cd8b7c676-rg4sd\" (UID: \"aead8c46-9f8b-45dd-9561-b320c5c7bde4\") " pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.174335 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d5060c-9d3e-4517-8910-0ef46172a190" path="/var/lib/kubelet/pods/e4d5060c-9d3e-4517-8910-0ef46172a190/volumes" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.174967 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.174996 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.175009 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.187486 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.188890 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.190567 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-qs5db" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.190941 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.192737 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.193280 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.202190 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.215218 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.220704 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.224108 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.229659 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.232598 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.236146 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.238092 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.241152 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.241434 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.241573 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.268334 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.289476 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.289587 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.291360 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.291557 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.291644 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1678095e-0a1d-4199-90c6-ea3afc879e0b-logs\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.291943 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfp28\" (UniqueName: \"kubernetes.io/projected/1678095e-0a1d-4199-90c6-ea3afc879e0b-kube-api-access-cfp28\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.363149 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-687d55ffd9-pqcz2"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.369651 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.372690 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.375951 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-687d55ffd9-pqcz2"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.393753 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw92x\" (UniqueName: \"kubernetes.io/projected/6d5cf795-cb42-4d01-8121-5ef71cedd729-kube-api-access-vw92x\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.393794 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szf9g\" (UniqueName: \"kubernetes.io/projected/7b5e0a69-30c9-435f-a566-b97de4e1b850-kube-api-access-szf9g\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.393844 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfp28\" (UniqueName: \"kubernetes.io/projected/1678095e-0a1d-4199-90c6-ea3afc879e0b-kube-api-access-cfp28\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.393870 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5cf795-cb42-4d01-8121-5ef71cedd729-config-data\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.393886 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.393957 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5cf795-cb42-4d01-8121-5ef71cedd729-logs\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.393991 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.394037 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-config-data\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.394110 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.394154 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.394185 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.394202 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1678095e-0a1d-4199-90c6-ea3afc879e0b-logs\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.394224 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5cf795-cb42-4d01-8121-5ef71cedd729-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.394306 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.394377 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b5e0a69-30c9-435f-a566-b97de4e1b850-logs\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.394441 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.395643 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1678095e-0a1d-4199-90c6-ea3afc879e0b-logs\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.400025 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.400390 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.400923 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.425615 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfp28\" (UniqueName: \"kubernetes.io/projected/1678095e-0a1d-4199-90c6-ea3afc879e0b-kube-api-access-cfp28\") pod \"watcher-decision-engine-0\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496299 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7lpj\" (UniqueName: \"kubernetes.io/projected/112a29db-e714-4142-a2f5-d094c71ee22a-kube-api-access-q7lpj\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496680 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5cf795-cb42-4d01-8121-5ef71cedd729-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496719 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-swift-storage-0\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496778 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496827 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b5e0a69-30c9-435f-a566-b97de4e1b850-logs\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496863 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-config\") pod 
\"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496887 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496928 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-nb\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496953 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw92x\" (UniqueName: \"kubernetes.io/projected/6d5cf795-cb42-4d01-8121-5ef71cedd729-kube-api-access-vw92x\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.496976 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szf9g\" (UniqueName: \"kubernetes.io/projected/7b5e0a69-30c9-435f-a566-b97de4e1b850-kube-api-access-szf9g\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.497011 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5cf795-cb42-4d01-8121-5ef71cedd729-config-data\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.497032 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.497085 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-svc\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.497111 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5cf795-cb42-4d01-8121-5ef71cedd729-logs\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.497139 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-sb\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc 
kubenswrapper[4922]: I0126 14:29:33.497177 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-config-data\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.497199 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.498378 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b5e0a69-30c9-435f-a566-b97de4e1b850-logs\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.500712 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5cf795-cb42-4d01-8121-5ef71cedd729-logs\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.505164 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5cf795-cb42-4d01-8121-5ef71cedd729-config-data\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.506422 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-config-data\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.513439 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.515610 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.517968 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.519536 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5cf795-cb42-4d01-8121-5ef71cedd729-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.520682 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw92x\" (UniqueName: \"kubernetes.io/projected/6d5cf795-cb42-4d01-8121-5ef71cedd729-kube-api-access-vw92x\") pod \"watcher-applier-0\" (UID: \"6d5cf795-cb42-4d01-8121-5ef71cedd729\") " pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.521142 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szf9g\" (UniqueName: \"kubernetes.io/projected/7b5e0a69-30c9-435f-a566-b97de4e1b850-kube-api-access-szf9g\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.527886 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7b5e0a69-30c9-435f-a566-b97de4e1b850-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7b5e0a69-30c9-435f-a566-b97de4e1b850\") " pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.553252 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.592175 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.599281 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-config\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.599344 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-nb\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.599389 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-svc\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.599416 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-sb\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.599492 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7lpj\" (UniqueName: \"kubernetes.io/projected/112a29db-e714-4142-a2f5-d094c71ee22a-kube-api-access-q7lpj\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.599519 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-swift-storage-0\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.600459 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-swift-storage-0\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.601023 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-config\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.601595 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-nb\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.602185 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-svc\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.602600 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-sb\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.602925 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.622271 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7lpj\" (UniqueName: \"kubernetes.io/projected/112a29db-e714-4142-a2f5-d094c71ee22a-kube-api-access-q7lpj\") pod \"dnsmasq-dns-687d55ffd9-pqcz2\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.848824 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.862338 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c4549854d-f2kpv"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.892500 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cd8b7c676-rg4sd"] Jan 26 14:29:33 crc kubenswrapper[4922]: I0126 14:29:33.989367 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c4549854d-f2kpv" event={"ID":"75199453-47fb-4d94-ae1d-908c20b64cfd","Type":"ContainerStarted","Data":"1898dc4132c0dcafc2de6e0ccb1d8fa101290c89f52715e41828dd66b33ad330"} Jan 26 14:29:34 crc kubenswrapper[4922]: I0126 14:29:34.267040 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 14:29:34 crc kubenswrapper[4922]: I0126 14:29:34.442528 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 14:29:34 crc kubenswrapper[4922]: I0126 14:29:34.456361 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:29:34 crc kubenswrapper[4922]: W0126 14:29:34.583128 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod112a29db_e714_4142_a2f5_d094c71ee22a.slice/crio-f0e8beeb98ed1e4bdd8456414b847e84528485bfce8f1711b8bd6d1128051a4d WatchSource:0}: Error finding container f0e8beeb98ed1e4bdd8456414b847e84528485bfce8f1711b8bd6d1128051a4d: Status 404 returned error can't find the container with id f0e8beeb98ed1e4bdd8456414b847e84528485bfce8f1711b8bd6d1128051a4d Jan 26 14:29:34 crc kubenswrapper[4922]: I0126 14:29:34.584758 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-687d55ffd9-pqcz2"] Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.035593 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7b5e0a69-30c9-435f-a566-b97de4e1b850","Type":"ContainerStarted","Data":"3e9b418f04f54ae82c94612400d16ef657f1061770853c16ee9116eb403357c0"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.035878 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7b5e0a69-30c9-435f-a566-b97de4e1b850","Type":"ContainerStarted","Data":"5a56c92f9d0dcd55cb8c4e896948d47500ae83843340c2841254648683ee187a"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.049537 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerStarted","Data":"350f4d05df242525414822dc717fc758cb1f5d15b98114d8bf47543dca68a9ea"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.049591 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerStarted","Data":"02c1fbe28400642a12b950957b5ab56000f825977cf38a17977628f451180fc2"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.053120 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cd8b7c676-rg4sd" event={"ID":"aead8c46-9f8b-45dd-9561-b320c5c7bde4","Type":"ContainerStarted","Data":"8762f1861e3cda4cd56246f2a5aa11c7d1f932efb33d37ce1a4b6745e675014f"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.053165 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-7cd8b7c676-rg4sd" event={"ID":"aead8c46-9f8b-45dd-9561-b320c5c7bde4","Type":"ContainerStarted","Data":"cbb1e134b92ff0c552543638f76ad48c6b393f4d4a37d80b15accc197b6d603a"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.053346 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.060138 4922 generic.go:334] "Generic (PLEG): container finished" podID="64dc8567-a56e-4cf4-8155-5b06c405f7ba" containerID="12325cb746235a5035dcbb0b5e62405626e7dd4ecbfbf27aa59d5e9353bf71a8" exitCode=0 Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.060220 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nzdsh" event={"ID":"64dc8567-a56e-4cf4-8155-5b06c405f7ba","Type":"ContainerDied","Data":"12325cb746235a5035dcbb0b5e62405626e7dd4ecbfbf27aa59d5e9353bf71a8"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.068045 4922 generic.go:334] "Generic (PLEG): container finished" podID="112a29db-e714-4142-a2f5-d094c71ee22a" containerID="9938c5908ce7a4d834bbe2bfdc7dce21cc8c0753522d09cd63b9aba4f67b8534" exitCode=0 Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.068138 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" event={"ID":"112a29db-e714-4142-a2f5-d094c71ee22a","Type":"ContainerDied","Data":"9938c5908ce7a4d834bbe2bfdc7dce21cc8c0753522d09cd63b9aba4f67b8534"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.068160 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" event={"ID":"112a29db-e714-4142-a2f5-d094c71ee22a","Type":"ContainerStarted","Data":"f0e8beeb98ed1e4bdd8456414b847e84528485bfce8f1711b8bd6d1128051a4d"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.073184 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.07317017 podStartE2EDuration="2.07317017s" podCreationTimestamp="2026-01-26 14:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:35.062562722 +0000 UTC m=+1192.264825494" watchObservedRunningTime="2026-01-26 14:29:35.07317017 +0000 UTC m=+1192.275432943" Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.073831 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c4549854d-f2kpv" event={"ID":"75199453-47fb-4d94-ae1d-908c20b64cfd","Type":"ContainerStarted","Data":"263992d24b3b56bee7082a340f9cfa7f02f0885ba745e9290dd956c9feffb32b"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.075397 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"6d5cf795-cb42-4d01-8121-5ef71cedd729","Type":"ContainerStarted","Data":"8bb3b916125f861ba586f6fc277e563b97e0cdad34b3649ca8d4808c4056752d"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.075424 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"6d5cf795-cb42-4d01-8121-5ef71cedd729","Type":"ContainerStarted","Data":"f056626284bee3b61de9aac7aab8af6ef9a1681e3d982a4977099d66a722bea7"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.078273 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lt6mt" 
event={"ID":"99c8b640-ac97-4a3e-8e4c-1781bd756396","Type":"ContainerStarted","Data":"a04ecedb0937b481c79df65e9f38d469fe0b6a913ee51d8b391e1e0fd2332851"} Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.104651 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7cd8b7c676-rg4sd" podStartSLOduration=3.104630726 podStartE2EDuration="3.104630726s" podCreationTimestamp="2026-01-26 14:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:35.102773165 +0000 UTC m=+1192.305035947" watchObservedRunningTime="2026-01-26 14:29:35.104630726 +0000 UTC m=+1192.306893498" Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.113931 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7784db-1198-4bd4-bed0-da049559613b" path="/var/lib/kubelet/pods/3b7784db-1198-4bd4-bed0-da049559613b/volumes" Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.114717 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ec42e5-bf03-41f3-93cf-e18347511ed0" path="/var/lib/kubelet/pods/41ec42e5-bf03-41f3-93cf-e18347511ed0/volumes" Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.115609 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57a3e81-4196-42d2-ac31-1b772322f93d" path="/var/lib/kubelet/pods/b57a3e81-4196-42d2-ac31-1b772322f93d/volumes" Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.155367 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.155348416 podStartE2EDuration="2.155348416s" podCreationTimestamp="2026-01-26 14:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:35.132869953 +0000 UTC m=+1192.335132725" watchObservedRunningTime="2026-01-26 14:29:35.155348416 +0000 UTC m=+1192.357611188" Jan 26 14:29:35 crc kubenswrapper[4922]: I0126 14:29:35.195175 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-lt6mt" podStartSLOduration=5.449832164 podStartE2EDuration="1m19.195149766s" podCreationTimestamp="2026-01-26 14:28:16 +0000 UTC" firstStartedPulling="2026-01-26 14:28:19.610945639 +0000 UTC m=+1116.813208411" lastFinishedPulling="2026-01-26 14:29:33.356263241 +0000 UTC m=+1190.558526013" observedRunningTime="2026-01-26 14:29:35.177280282 +0000 UTC m=+1192.379543054" watchObservedRunningTime="2026-01-26 14:29:35.195149766 +0000 UTC m=+1192.397412568" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.091411 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" event={"ID":"112a29db-e714-4142-a2f5-d094c71ee22a","Type":"ContainerStarted","Data":"8fe9236217ac07171eb9781ddeab040f93f4cc2d342103141f61213faf13b3f9"} Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.095255 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c4549854d-f2kpv" event={"ID":"75199453-47fb-4d94-ae1d-908c20b64cfd","Type":"ContainerStarted","Data":"2433d3f104bb941799f46121b6f6d081427213b2f29a6017cb1a8e365f7d1fe2"} Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.095423 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.098909 4922 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/watcher-api-0" event={"ID":"7b5e0a69-30c9-435f-a566-b97de4e1b850","Type":"ContainerStarted","Data":"0df9b29c635b1347700bf83fd1698cfa700858d639dad39225697f07a6fbbbbc"} Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.098957 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.124655 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" podStartSLOduration=3.124634121 podStartE2EDuration="3.124634121s" podCreationTimestamp="2026-01-26 14:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:36.108841566 +0000 UTC m=+1193.311104348" watchObservedRunningTime="2026-01-26 14:29:36.124634121 +0000 UTC m=+1193.326896893" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.149938 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.149914423 podStartE2EDuration="3.149914423s" podCreationTimestamp="2026-01-26 14:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:36.12777255 +0000 UTC m=+1193.330035332" watchObservedRunningTime="2026-01-26 14:29:36.149914423 +0000 UTC m=+1193.352177195" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.161783 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c4549854d-f2kpv" podStartSLOduration=4.161762217 podStartE2EDuration="4.161762217s" podCreationTimestamp="2026-01-26 14:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:36.149231053 +0000 UTC m=+1193.351493825" watchObservedRunningTime="2026-01-26 14:29:36.161762217 +0000 UTC m=+1193.364024989" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.495879 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.582659 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-db-sync-config-data\") pod \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.583118 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzxhf\" (UniqueName: \"kubernetes.io/projected/64dc8567-a56e-4cf4-8155-5b06c405f7ba-kube-api-access-pzxhf\") pod \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.583146 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-combined-ca-bundle\") pod \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\" (UID: \"64dc8567-a56e-4cf4-8155-5b06c405f7ba\") " Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.589244 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64dc8567-a56e-4cf4-8155-5b06c405f7ba-kube-api-access-pzxhf" (OuterVolumeSpecName: "kube-api-access-pzxhf") pod "64dc8567-a56e-4cf4-8155-5b06c405f7ba" (UID: "64dc8567-a56e-4cf4-8155-5b06c405f7ba"). InnerVolumeSpecName "kube-api-access-pzxhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.602731 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "64dc8567-a56e-4cf4-8155-5b06c405f7ba" (UID: "64dc8567-a56e-4cf4-8155-5b06c405f7ba"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.617175 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64dc8567-a56e-4cf4-8155-5b06c405f7ba" (UID: "64dc8567-a56e-4cf4-8155-5b06c405f7ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.684563 4922 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.684797 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzxhf\" (UniqueName: \"kubernetes.io/projected/64dc8567-a56e-4cf4-8155-5b06c405f7ba-kube-api-access-pzxhf\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:36 crc kubenswrapper[4922]: I0126 14:29:36.684853 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64dc8567-a56e-4cf4-8155-5b06c405f7ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.117489 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nzdsh" event={"ID":"64dc8567-a56e-4cf4-8155-5b06c405f7ba","Type":"ContainerDied","Data":"94c6a58e0d7f55fe6a4d18fb19488a7bd43f0b839bf7e8e05cb53d211f54f8a6"} Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.117533 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c6a58e0d7f55fe6a4d18fb19488a7bd43f0b839bf7e8e05cb53d211f54f8a6" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.117612 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nzdsh" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.118789 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.118870 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.325362 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6bff9847b7-bs5nf"] Jan 26 14:29:37 crc kubenswrapper[4922]: E0126 14:29:37.325711 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64dc8567-a56e-4cf4-8155-5b06c405f7ba" containerName="barbican-db-sync" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.325727 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="64dc8567-a56e-4cf4-8155-5b06c405f7ba" containerName="barbican-db-sync" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.325903 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="64dc8567-a56e-4cf4-8155-5b06c405f7ba" containerName="barbican-db-sync" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.327342 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.334473 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.334703 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.334877 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9cr42" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.379550 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bff9847b7-bs5nf"] Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.400218 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk"] Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.401345 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857d24cb-db5e-45ef-9b8a-025ee81b0083-logs\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.401407 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857d24cb-db5e-45ef-9b8a-025ee81b0083-config-data\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.401473 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnbvm\" (UniqueName: \"kubernetes.io/projected/857d24cb-db5e-45ef-9b8a-025ee81b0083-kube-api-access-xnbvm\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.401545 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857d24cb-db5e-45ef-9b8a-025ee81b0083-combined-ca-bundle\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.401573 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/857d24cb-db5e-45ef-9b8a-025ee81b0083-config-data-custom\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.401982 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.407696 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.420708 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk"] Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.469295 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-687d55ffd9-pqcz2"] Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.499891 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74ddf65b4f-8lxm8"] Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.501421 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.502903 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ec145d-ce40-473f-8598-dbf02d89cc44-combined-ca-bundle\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.502941 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857d24cb-db5e-45ef-9b8a-025ee81b0083-combined-ca-bundle\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.502967 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/857d24cb-db5e-45ef-9b8a-025ee81b0083-config-data-custom\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.503005 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857d24cb-db5e-45ef-9b8a-025ee81b0083-logs\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.503027 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ec145d-ce40-473f-8598-dbf02d89cc44-logs\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.503056 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857d24cb-db5e-45ef-9b8a-025ee81b0083-config-data\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.503086 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94ec145d-ce40-473f-8598-dbf02d89cc44-config-data-custom\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.503130 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ec145d-ce40-473f-8598-dbf02d89cc44-config-data\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.503146 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4zj\" (UniqueName: \"kubernetes.io/projected/94ec145d-ce40-473f-8598-dbf02d89cc44-kube-api-access-6d4zj\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.503170 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnbvm\" (UniqueName: \"kubernetes.io/projected/857d24cb-db5e-45ef-9b8a-025ee81b0083-kube-api-access-xnbvm\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.504474 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857d24cb-db5e-45ef-9b8a-025ee81b0083-logs\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.510909 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/857d24cb-db5e-45ef-9b8a-025ee81b0083-combined-ca-bundle\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.511197 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/857d24cb-db5e-45ef-9b8a-025ee81b0083-config-data\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.511418 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/857d24cb-db5e-45ef-9b8a-025ee81b0083-config-data-custom\") pod \"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.529129 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74ddf65b4f-8lxm8"] Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.556163 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnbvm\" (UniqueName: \"kubernetes.io/projected/857d24cb-db5e-45ef-9b8a-025ee81b0083-kube-api-access-xnbvm\") pod 
\"barbican-worker-6bff9847b7-bs5nf\" (UID: \"857d24cb-db5e-45ef-9b8a-025ee81b0083\") " pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.600148 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b778d8db6-87lbq"] Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615016 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ec145d-ce40-473f-8598-dbf02d89cc44-combined-ca-bundle\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615146 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ec145d-ce40-473f-8598-dbf02d89cc44-logs\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615171 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-swift-storage-0\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615197 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-svc\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615229 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94ec145d-ce40-473f-8598-dbf02d89cc44-config-data-custom\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615254 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-nb\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615292 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-config\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615320 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ec145d-ce40-473f-8598-dbf02d89cc44-config-data\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " 
pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615340 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d4zj\" (UniqueName: \"kubernetes.io/projected/94ec145d-ce40-473f-8598-dbf02d89cc44-kube-api-access-6d4zj\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615362 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrkth\" (UniqueName: \"kubernetes.io/projected/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-kube-api-access-mrkth\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.615387 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-sb\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.616292 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ec145d-ce40-473f-8598-dbf02d89cc44-logs\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.623801 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94ec145d-ce40-473f-8598-dbf02d89cc44-config-data-custom\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.625184 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.627696 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ec145d-ce40-473f-8598-dbf02d89cc44-combined-ca-bundle\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.629457 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ec145d-ce40-473f-8598-dbf02d89cc44-config-data\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.629903 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.640793 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d4zj\" (UniqueName: \"kubernetes.io/projected/94ec145d-ce40-473f-8598-dbf02d89cc44-kube-api-access-6d4zj\") pod \"barbican-keystone-listener-7f88dc4bbb-qt8fk\" (UID: \"94ec145d-ce40-473f-8598-dbf02d89cc44\") " pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.642598 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b778d8db6-87lbq"] Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.665520 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6bff9847b7-bs5nf" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719041 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719141 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-swift-storage-0\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719168 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-combined-ca-bundle\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719186 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-svc\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719213 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvf6g\" (UniqueName: \"kubernetes.io/projected/cd13bb86-5407-4a3b-b563-469791214577-kube-api-access-lvf6g\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719240 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd13bb86-5407-4a3b-b563-469791214577-logs\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719261 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-nb\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719291 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-config\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719319 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrkth\" (UniqueName: \"kubernetes.io/projected/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-kube-api-access-mrkth\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719337 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-sb\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.719381 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data-custom\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.720616 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-svc\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.721224 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-nb\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.721371 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-sb\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.721718 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-config\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.722479 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-swift-storage-0\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.740035 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.740450 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrkth\" (UniqueName: \"kubernetes.io/projected/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-kube-api-access-mrkth\") pod \"dnsmasq-dns-74ddf65b4f-8lxm8\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.756826 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.822472 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd13bb86-5407-4a3b-b563-469791214577-logs\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.822823 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd13bb86-5407-4a3b-b563-469791214577-logs\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.823428 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data-custom\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.823576 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.823646 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-combined-ca-bundle\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.823697 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvf6g\" (UniqueName: \"kubernetes.io/projected/cd13bb86-5407-4a3b-b563-469791214577-kube-api-access-lvf6g\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.829805 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.830665 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-combined-ca-bundle\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.832097 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data-custom\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:37 crc kubenswrapper[4922]: I0126 14:29:37.847058 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvf6g\" (UniqueName: \"kubernetes.io/projected/cd13bb86-5407-4a3b-b563-469791214577-kube-api-access-lvf6g\") pod \"barbican-api-6b778d8db6-87lbq\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:38 crc kubenswrapper[4922]: I0126 14:29:38.068257 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:38 crc kubenswrapper[4922]: I0126 14:29:38.236507 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bff9847b7-bs5nf"] Jan 26 14:29:38 crc kubenswrapper[4922]: I0126 14:29:38.409327 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk"] Jan 26 14:29:38 crc kubenswrapper[4922]: I0126 14:29:38.480656 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74ddf65b4f-8lxm8"] Jan 26 14:29:38 crc kubenswrapper[4922]: I0126 14:29:38.593472 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 26 14:29:38 crc kubenswrapper[4922]: I0126 14:29:38.603701 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 14:29:38 crc kubenswrapper[4922]: I0126 14:29:38.603786 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:29:38 crc kubenswrapper[4922]: I0126 14:29:38.778589 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 14:29:39 crc kubenswrapper[4922]: I0126 14:29:39.104645 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:29:39 crc kubenswrapper[4922]: I0126 14:29:39.150519 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" podUID="112a29db-e714-4142-a2f5-d094c71ee22a" containerName="dnsmasq-dns" containerID="cri-o://8fe9236217ac07171eb9781ddeab040f93f4cc2d342103141f61213faf13b3f9" gracePeriod=10 Jan 26 14:29:39 crc kubenswrapper[4922]: I0126 14:29:39.356656 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.162411 4922 generic.go:334] "Generic (PLEG): container finished" podID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerID="350f4d05df242525414822dc717fc758cb1f5d15b98114d8bf47543dca68a9ea" exitCode=1 Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.162477 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerDied","Data":"350f4d05df242525414822dc717fc758cb1f5d15b98114d8bf47543dca68a9ea"} Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.163475 4922 scope.go:117] "RemoveContainer" containerID="350f4d05df242525414822dc717fc758cb1f5d15b98114d8bf47543dca68a9ea" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.190260 4922 generic.go:334] "Generic (PLEG): container finished" podID="112a29db-e714-4142-a2f5-d094c71ee22a" containerID="8fe9236217ac07171eb9781ddeab040f93f4cc2d342103141f61213faf13b3f9" exitCode=0 Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.190306 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" event={"ID":"112a29db-e714-4142-a2f5-d094c71ee22a","Type":"ContainerDied","Data":"8fe9236217ac07171eb9781ddeab040f93f4cc2d342103141f61213faf13b3f9"} Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.288908 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75bc76c88b-b6znr"] Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.290675 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.295756 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.298004 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.309551 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75bc76c88b-b6znr"] Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.398119 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-logs\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.398198 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nbbn\" (UniqueName: \"kubernetes.io/projected/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-kube-api-access-8nbbn\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.398316 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-internal-tls-certs\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.398353 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-public-tls-certs\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.398403 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-combined-ca-bundle\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.398467 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-config-data\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.398488 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-config-data-custom\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.500443 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-combined-ca-bundle\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.500526 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-config-data\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.500544 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-config-data-custom\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.500583 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-logs\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.500618 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nbbn\" (UniqueName: \"kubernetes.io/projected/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-kube-api-access-8nbbn\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.500680 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-internal-tls-certs\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.500701 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-public-tls-certs\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.503016 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-logs\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.507210 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-combined-ca-bundle\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.508993 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-config-data-custom\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.509981 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-config-data\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.510366 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-public-tls-certs\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.510515 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-internal-tls-certs\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.518633 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nbbn\" (UniqueName: \"kubernetes.io/projected/5ed1bf50-3aec-40cc-843f-afe6a0b2027d-kube-api-access-8nbbn\") pod \"barbican-api-75bc76c88b-b6znr\" (UID: \"5ed1bf50-3aec-40cc-843f-afe6a0b2027d\") " pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:40 crc kubenswrapper[4922]: I0126 14:29:40.610814 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:41 crc kubenswrapper[4922]: I0126 14:29:41.031019 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:29:41 crc kubenswrapper[4922]: I0126 14:29:41.175295 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6c779658fd-pldff" Jan 26 14:29:41 crc kubenswrapper[4922]: I0126 14:29:41.230660 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b4f749b44-2qdw7"] Jan 26 14:29:41 crc kubenswrapper[4922]: I0126 14:29:41.230851 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b4f749b44-2qdw7" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon-log" containerID="cri-o://c2474915dcbbbd3a768fade598d9d0b2e8243b6212b803776364f82a9316297c" gracePeriod=30 Jan 26 14:29:41 crc kubenswrapper[4922]: I0126 14:29:41.231257 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b4f749b44-2qdw7" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon" containerID="cri-o://0142e706f07caaac11da2ed37cb72515ef4a009fdcfce539c86464a124266490" gracePeriod=30 Jan 26 14:29:42 crc kubenswrapper[4922]: I0126 14:29:42.210342 4922 generic.go:334] "Generic (PLEG): container finished" podID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerID="0142e706f07caaac11da2ed37cb72515ef4a009fdcfce539c86464a124266490" exitCode=0 Jan 26 14:29:42 crc kubenswrapper[4922]: I0126 14:29:42.210384 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b4f749b44-2qdw7" event={"ID":"9bccd630-51ec-481b-97c6-1f2757dfc685","Type":"ContainerDied","Data":"0142e706f07caaac11da2ed37cb72515ef4a009fdcfce539c86464a124266490"} Jan 26 14:29:42 crc kubenswrapper[4922]: W0126 14:29:42.581732 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4798bf8_f62b_4f88_8a42_4f33ee79eeaa.slice/crio-2a030e9015fc6d4fe69a23fa0a1f2db1457e860ec8d03c5f39d809c26c032ef0 WatchSource:0}: Error finding container 2a030e9015fc6d4fe69a23fa0a1f2db1457e860ec8d03c5f39d809c26c032ef0: Status 404 returned error can't find the container with id 2a030e9015fc6d4fe69a23fa0a1f2db1457e860ec8d03c5f39d809c26c032ef0 Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.225177 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" event={"ID":"94ec145d-ce40-473f-8598-dbf02d89cc44","Type":"ContainerStarted","Data":"1530fa0d393415c4378de743afecc7e7a0d61805400e5f408e721ecfa72d7e9e"} Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.227252 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bff9847b7-bs5nf" event={"ID":"857d24cb-db5e-45ef-9b8a-025ee81b0083","Type":"ContainerStarted","Data":"77d6f1ce7ed6caed5f55a7f88fc3198f650e4a11d3ad6887ca25c7359613e169"} Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.229998 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" event={"ID":"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa","Type":"ContainerStarted","Data":"2a030e9015fc6d4fe69a23fa0a1f2db1457e860ec8d03c5f39d809c26c032ef0"} Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.235047 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" 
event={"ID":"112a29db-e714-4142-a2f5-d094c71ee22a","Type":"ContainerDied","Data":"f0e8beeb98ed1e4bdd8456414b847e84528485bfce8f1711b8bd6d1128051a4d"} Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.235100 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e8beeb98ed1e4bdd8456414b847e84528485bfce8f1711b8bd6d1128051a4d" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.277263 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.355203 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-svc\") pod \"112a29db-e714-4142-a2f5-d094c71ee22a\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.356226 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-nb\") pod \"112a29db-e714-4142-a2f5-d094c71ee22a\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.356345 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-config\") pod \"112a29db-e714-4142-a2f5-d094c71ee22a\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.356415 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-swift-storage-0\") pod \"112a29db-e714-4142-a2f5-d094c71ee22a\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.356495 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-sb\") pod \"112a29db-e714-4142-a2f5-d094c71ee22a\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.357193 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7lpj\" (UniqueName: \"kubernetes.io/projected/112a29db-e714-4142-a2f5-d094c71ee22a-kube-api-access-q7lpj\") pod \"112a29db-e714-4142-a2f5-d094c71ee22a\" (UID: \"112a29db-e714-4142-a2f5-d094c71ee22a\") " Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.395255 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112a29db-e714-4142-a2f5-d094c71ee22a-kube-api-access-q7lpj" (OuterVolumeSpecName: "kube-api-access-q7lpj") pod "112a29db-e714-4142-a2f5-d094c71ee22a" (UID: "112a29db-e714-4142-a2f5-d094c71ee22a"). InnerVolumeSpecName "kube-api-access-q7lpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.459880 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7lpj\" (UniqueName: \"kubernetes.io/projected/112a29db-e714-4142-a2f5-d094c71ee22a-kube-api-access-q7lpj\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.549918 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "112a29db-e714-4142-a2f5-d094c71ee22a" (UID: "112a29db-e714-4142-a2f5-d094c71ee22a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.551468 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-config" (OuterVolumeSpecName: "config") pod "112a29db-e714-4142-a2f5-d094c71ee22a" (UID: "112a29db-e714-4142-a2f5-d094c71ee22a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.554929 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.554969 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.562442 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.562470 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.566304 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "112a29db-e714-4142-a2f5-d094c71ee22a" (UID: "112a29db-e714-4142-a2f5-d094c71ee22a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.568575 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "112a29db-e714-4142-a2f5-d094c71ee22a" (UID: "112a29db-e714-4142-a2f5-d094c71ee22a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.593237 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.594714 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "112a29db-e714-4142-a2f5-d094c71ee22a" (UID: "112a29db-e714-4142-a2f5-d094c71ee22a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.603578 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.615053 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.620987 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.668369 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.668402 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.668421 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112a29db-e714-4142-a2f5-d094c71ee22a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.698416 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b778d8db6-87lbq"] Jan 26 14:29:43 crc kubenswrapper[4922]: E0126 14:29:43.756539 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" Jan 26 14:29:43 crc kubenswrapper[4922]: I0126 14:29:43.783367 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75bc76c88b-b6znr"] Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.275849 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerStarted","Data":"2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855"} Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.280435 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b778d8db6-87lbq" event={"ID":"cd13bb86-5407-4a3b-b563-469791214577","Type":"ContainerStarted","Data":"3db85e46a1cb5f03146715613ce0a5bcaeb213711ea2857d779e9e89608fddd9"} Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.282515 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4483d7ac-397e-4220-82f3-c6832fe69c2e","Type":"ContainerStarted","Data":"65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e"} Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.282707 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerName="ceilometer-notification-agent" containerID="cri-o://ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345" gracePeriod=30 Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.282808 4922 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.282891 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerName="proxy-httpd" containerID="cri-o://65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e" gracePeriod=30 Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.287910 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bc76c88b-b6znr" event={"ID":"5ed1bf50-3aec-40cc-843f-afe6a0b2027d","Type":"ContainerStarted","Data":"4ef25f363d33c389a1924dc8b0fe89ab7148838a60b212352d533e01a51aa95e"} Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.289131 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" containerID="0a82e0b3537a1b01ba3d756d702288dba9ecae822fbb9b79e29b21531c9d62de" exitCode=0 Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.290628 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" event={"ID":"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa","Type":"ContainerDied","Data":"0a82e0b3537a1b01ba3d756d702288dba9ecae822fbb9b79e29b21531c9d62de"} Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.290681 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-687d55ffd9-pqcz2" Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.402354 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.436648 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.451774 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-687d55ffd9-pqcz2"] Jan 26 14:29:44 crc kubenswrapper[4922]: I0126 14:29:44.472557 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-687d55ffd9-pqcz2"] Jan 26 14:29:45 crc kubenswrapper[4922]: I0126 14:29:45.109781 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112a29db-e714-4142-a2f5-d094c71ee22a" path="/var/lib/kubelet/pods/112a29db-e714-4142-a2f5-d094c71ee22a/volumes" Jan 26 14:29:45 crc kubenswrapper[4922]: I0126 14:29:45.304255 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bff9847b7-bs5nf" event={"ID":"857d24cb-db5e-45ef-9b8a-025ee81b0083","Type":"ContainerStarted","Data":"cfeacee43199d7c34452af044a6706f160649a9fec433c961225d6715bdb94a0"} Jan 26 14:29:45 crc kubenswrapper[4922]: I0126 14:29:45.307866 4922 generic.go:334] "Generic (PLEG): container finished" podID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerID="65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e" exitCode=0 Jan 26 14:29:45 crc kubenswrapper[4922]: I0126 14:29:45.307952 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4483d7ac-397e-4220-82f3-c6832fe69c2e","Type":"ContainerDied","Data":"65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e"} Jan 26 14:29:45 crc kubenswrapper[4922]: I0126 14:29:45.310124 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bc76c88b-b6znr" 
event={"ID":"5ed1bf50-3aec-40cc-843f-afe6a0b2027d","Type":"ContainerStarted","Data":"fc262e1c734d5928760f2bcb458a041258c1dbea3126480c5376e9cbc3234d85"} Jan 26 14:29:45 crc kubenswrapper[4922]: I0126 14:29:45.313276 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" event={"ID":"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa","Type":"ContainerStarted","Data":"b66757c71c460f451103d0a0037a31a4d528be992a78412ab84286759b87f132"} Jan 26 14:29:45 crc kubenswrapper[4922]: I0126 14:29:45.313367 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:45 crc kubenswrapper[4922]: I0126 14:29:45.315234 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b778d8db6-87lbq" event={"ID":"cd13bb86-5407-4a3b-b563-469791214577","Type":"ContainerStarted","Data":"1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f"} Jan 26 14:29:45 crc kubenswrapper[4922]: I0126 14:29:45.316990 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" event={"ID":"94ec145d-ce40-473f-8598-dbf02d89cc44","Type":"ContainerStarted","Data":"5616593404af4a0a48d37c5b4e83a1ed2ef22bf7a12ff7f461b1a5d01b9b7593"} Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.120927 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.142508 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" podStartSLOduration=9.14248877 podStartE2EDuration="9.14248877s" podCreationTimestamp="2026-01-26 14:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:45.333162919 +0000 UTC m=+1202.535425711" watchObservedRunningTime="2026-01-26 14:29:46.14248877 +0000 UTC m=+1203.344751542" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.224871 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-log-httpd\") pod \"4483d7ac-397e-4220-82f3-c6832fe69c2e\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.225013 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntrs9\" (UniqueName: \"kubernetes.io/projected/4483d7ac-397e-4220-82f3-c6832fe69c2e-kube-api-access-ntrs9\") pod \"4483d7ac-397e-4220-82f3-c6832fe69c2e\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.225041 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-run-httpd\") pod \"4483d7ac-397e-4220-82f3-c6832fe69c2e\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.225077 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-sg-core-conf-yaml\") pod \"4483d7ac-397e-4220-82f3-c6832fe69c2e\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.225128 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-scripts\") pod \"4483d7ac-397e-4220-82f3-c6832fe69c2e\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.225174 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-config-data\") pod \"4483d7ac-397e-4220-82f3-c6832fe69c2e\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.225192 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-combined-ca-bundle\") pod \"4483d7ac-397e-4220-82f3-c6832fe69c2e\" (UID: \"4483d7ac-397e-4220-82f3-c6832fe69c2e\") " Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.226469 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4483d7ac-397e-4220-82f3-c6832fe69c2e" (UID: "4483d7ac-397e-4220-82f3-c6832fe69c2e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.226838 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4483d7ac-397e-4220-82f3-c6832fe69c2e" (UID: "4483d7ac-397e-4220-82f3-c6832fe69c2e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.231784 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4483d7ac-397e-4220-82f3-c6832fe69c2e-kube-api-access-ntrs9" (OuterVolumeSpecName: "kube-api-access-ntrs9") pod "4483d7ac-397e-4220-82f3-c6832fe69c2e" (UID: "4483d7ac-397e-4220-82f3-c6832fe69c2e"). InnerVolumeSpecName "kube-api-access-ntrs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.232010 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-scripts" (OuterVolumeSpecName: "scripts") pod "4483d7ac-397e-4220-82f3-c6832fe69c2e" (UID: "4483d7ac-397e-4220-82f3-c6832fe69c2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.248323 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4483d7ac-397e-4220-82f3-c6832fe69c2e" (UID: "4483d7ac-397e-4220-82f3-c6832fe69c2e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.276421 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4483d7ac-397e-4220-82f3-c6832fe69c2e" (UID: "4483d7ac-397e-4220-82f3-c6832fe69c2e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.309107 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-config-data" (OuterVolumeSpecName: "config-data") pod "4483d7ac-397e-4220-82f3-c6832fe69c2e" (UID: "4483d7ac-397e-4220-82f3-c6832fe69c2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.327260 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntrs9\" (UniqueName: \"kubernetes.io/projected/4483d7ac-397e-4220-82f3-c6832fe69c2e-kube-api-access-ntrs9\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.327576 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.327665 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.327748 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.327832 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.327909 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4483d7ac-397e-4220-82f3-c6832fe69c2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.327990 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4483d7ac-397e-4220-82f3-c6832fe69c2e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.328234 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b778d8db6-87lbq" event={"ID":"cd13bb86-5407-4a3b-b563-469791214577","Type":"ContainerStarted","Data":"cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9"} Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.328366 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.328467 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.331565 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" event={"ID":"94ec145d-ce40-473f-8598-dbf02d89cc44","Type":"ContainerStarted","Data":"d0bc24dc504c78cc4b10c3a36cde80bdb007706ed52ffb6ca7d68d56f111f97e"} Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.335206 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bff9847b7-bs5nf" 
event={"ID":"857d24cb-db5e-45ef-9b8a-025ee81b0083","Type":"ContainerStarted","Data":"fbdca9b25bc6df77c1e2a5637476bc84734be38942c33f11fe52a9d4637cfa00"} Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.344161 4922 generic.go:334] "Generic (PLEG): container finished" podID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerID="ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345" exitCode=0 Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.344455 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4483d7ac-397e-4220-82f3-c6832fe69c2e","Type":"ContainerDied","Data":"ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345"} Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.344495 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4483d7ac-397e-4220-82f3-c6832fe69c2e","Type":"ContainerDied","Data":"793457ac10eebc77eeeb9d8cc72bf66d8ee04144762a5123d9fddd64ee661e42"} Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.344561 4922 scope.go:117] "RemoveContainer" containerID="65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.344889 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.355984 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bc76c88b-b6znr" event={"ID":"5ed1bf50-3aec-40cc-843f-afe6a0b2027d","Type":"ContainerStarted","Data":"9694e91074ee1240e8083745cdcc16f21292e34fd57a461672db1c97a6803308"} Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.356263 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.356366 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.371837 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b778d8db6-87lbq" podStartSLOduration=9.371813987 podStartE2EDuration="9.371813987s" podCreationTimestamp="2026-01-26 14:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:46.350509438 +0000 UTC m=+1203.552772230" watchObservedRunningTime="2026-01-26 14:29:46.371813987 +0000 UTC m=+1203.574076759" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.384645 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7f88dc4bbb-qt8fk" podStartSLOduration=7.161218537 podStartE2EDuration="9.384568587s" podCreationTimestamp="2026-01-26 14:29:37 +0000 UTC" firstStartedPulling="2026-01-26 14:29:42.601489154 +0000 UTC m=+1199.803751926" lastFinishedPulling="2026-01-26 14:29:44.824839204 +0000 UTC m=+1202.027101976" observedRunningTime="2026-01-26 14:29:46.368684409 +0000 UTC m=+1203.570947181" watchObservedRunningTime="2026-01-26 14:29:46.384568587 +0000 UTC m=+1203.586831359" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.398784 4922 scope.go:117] "RemoveContainer" containerID="ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.423186 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-worker-6bff9847b7-bs5nf" podStartSLOduration=7.216693489 podStartE2EDuration="9.423162784s" podCreationTimestamp="2026-01-26 14:29:37 +0000 UTC" firstStartedPulling="2026-01-26 14:29:42.596356899 +0000 UTC m=+1199.798619671" lastFinishedPulling="2026-01-26 14:29:44.802826194 +0000 UTC m=+1202.005088966" observedRunningTime="2026-01-26 14:29:46.39072534 +0000 UTC m=+1203.592988112" watchObservedRunningTime="2026-01-26 14:29:46.423162784 +0000 UTC m=+1203.625425556" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.440723 4922 scope.go:117] "RemoveContainer" containerID="65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.443091 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75bc76c88b-b6znr" podStartSLOduration=6.443051874 podStartE2EDuration="6.443051874s" podCreationTimestamp="2026-01-26 14:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:46.412998267 +0000 UTC m=+1203.615261039" watchObservedRunningTime="2026-01-26 14:29:46.443051874 +0000 UTC m=+1203.645314646" Jan 26 14:29:46 crc kubenswrapper[4922]: E0126 14:29:46.444001 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e\": container with ID starting with 65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e not found: ID does not exist" containerID="65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.444041 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e"} err="failed to get container status \"65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e\": rpc error: code = NotFound desc = could not find container \"65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e\": container with ID starting with 65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e not found: ID does not exist" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.444073 4922 scope.go:117] "RemoveContainer" containerID="ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345" Jan 26 14:29:46 crc kubenswrapper[4922]: E0126 14:29:46.444934 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345\": container with ID starting with ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345 not found: ID does not exist" containerID="ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.445015 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345"} err="failed to get container status \"ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345\": rpc error: code = NotFound desc = could not find container \"ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345\": container with ID starting with ea40a0b3123e81d32c847bd9524cdda9a6ac8c5df8db75d5b02dc1dabba06345 not found: ID does not exist" Jan 26 14:29:46 
crc kubenswrapper[4922]: I0126 14:29:46.508104 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.521160 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.530221 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:29:46 crc kubenswrapper[4922]: E0126 14:29:46.530671 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112a29db-e714-4142-a2f5-d094c71ee22a" containerName="dnsmasq-dns" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.530691 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="112a29db-e714-4142-a2f5-d094c71ee22a" containerName="dnsmasq-dns" Jan 26 14:29:46 crc kubenswrapper[4922]: E0126 14:29:46.530739 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112a29db-e714-4142-a2f5-d094c71ee22a" containerName="init" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.530746 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="112a29db-e714-4142-a2f5-d094c71ee22a" containerName="init" Jan 26 14:29:46 crc kubenswrapper[4922]: E0126 14:29:46.530756 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerName="ceilometer-notification-agent" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.530762 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerName="ceilometer-notification-agent" Jan 26 14:29:46 crc kubenswrapper[4922]: E0126 14:29:46.530772 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerName="proxy-httpd" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.530780 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerName="proxy-httpd" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.530961 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerName="ceilometer-notification-agent" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.530984 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" containerName="proxy-httpd" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.531005 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="112a29db-e714-4142-a2f5-d094c71ee22a" containerName="dnsmasq-dns" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.532548 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.536091 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.539794 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.550958 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.637949 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-log-httpd\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.638241 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-config-data\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.638607 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.638689 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-run-httpd\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.638726 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d2ks\" (UniqueName: \"kubernetes.io/projected/a7d63894-e178-44b9-9b6a-93b98cb78b8a-kube-api-access-8d2ks\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.638816 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.638865 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-scripts\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.672003 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b4f749b44-2qdw7" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Jan 26 14:29:46 crc 
kubenswrapper[4922]: I0126 14:29:46.740925 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-config-data\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.740985 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.741011 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-run-httpd\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.741027 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d2ks\" (UniqueName: \"kubernetes.io/projected/a7d63894-e178-44b9-9b6a-93b98cb78b8a-kube-api-access-8d2ks\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.741058 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.741105 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-scripts\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.741143 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-log-httpd\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.741520 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-log-httpd\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.742839 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-run-httpd\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.746795 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-scripts\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.747614 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-config-data\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.749192 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.753434 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.764282 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d2ks\" (UniqueName: \"kubernetes.io/projected/a7d63894-e178-44b9-9b6a-93b98cb78b8a-kube-api-access-8d2ks\") pod \"ceilometer-0\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " pod="openstack/ceilometer-0" Jan 26 14:29:46 crc kubenswrapper[4922]: I0126 14:29:46.860837 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:29:47 crc kubenswrapper[4922]: I0126 14:29:47.105211 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4483d7ac-397e-4220-82f3-c6832fe69c2e" path="/var/lib/kubelet/pods/4483d7ac-397e-4220-82f3-c6832fe69c2e/volumes" Jan 26 14:29:47 crc kubenswrapper[4922]: I0126 14:29:47.332127 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:29:47 crc kubenswrapper[4922]: I0126 14:29:47.376635 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerStarted","Data":"ffe1f4b9722c577561d773a8ecb797c6769fac582cecb6a10214e1c91428b236"} Jan 26 14:29:48 crc kubenswrapper[4922]: I0126 14:29:48.402297 4922 generic.go:334] "Generic (PLEG): container finished" podID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerID="2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855" exitCode=1 Jan 26 14:29:48 crc kubenswrapper[4922]: I0126 14:29:48.402529 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerDied","Data":"2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855"} Jan 26 14:29:48 crc kubenswrapper[4922]: I0126 14:29:48.402958 4922 scope.go:117] "RemoveContainer" containerID="350f4d05df242525414822dc717fc758cb1f5d15b98114d8bf47543dca68a9ea" Jan 26 14:29:48 crc kubenswrapper[4922]: I0126 14:29:48.403731 4922 scope.go:117] "RemoveContainer" containerID="2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855" Jan 26 14:29:48 crc kubenswrapper[4922]: E0126 14:29:48.404108 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(1678095e-0a1d-4199-90c6-ea3afc879e0b)\"" pod="openstack/watcher-decision-engine-0" 
podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" Jan 26 14:29:48 crc kubenswrapper[4922]: I0126 14:29:48.414039 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerStarted","Data":"c41f9da2244b62a7af0e87092700061ee879552a2c776f56525485277d89e7eb"} Jan 26 14:29:49 crc kubenswrapper[4922]: I0126 14:29:49.425237 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerStarted","Data":"02ad6f1e470e9ba0ddd7cff64a43f0aa82d74e584c7ad392e59de485356f11eb"} Jan 26 14:29:49 crc kubenswrapper[4922]: I0126 14:29:49.425580 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerStarted","Data":"2634ba48a15e7efc7cd0486fa403f3f5eabe101dd2668fd5e3a4201bd352d1cc"} Jan 26 14:29:49 crc kubenswrapper[4922]: I0126 14:29:49.426597 4922 generic.go:334] "Generic (PLEG): container finished" podID="91754680-73d8-4c72-a7bd-834959e192a1" containerID="fd62fad13f01dcbbf639bc9ff71f574f92522f976a084b2bb925a733ab522f31" exitCode=0 Jan 26 14:29:49 crc kubenswrapper[4922]: I0126 14:29:49.426631 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvk6w" event={"ID":"91754680-73d8-4c72-a7bd-834959e192a1","Type":"ContainerDied","Data":"fd62fad13f01dcbbf639bc9ff71f574f92522f976a084b2bb925a733ab522f31"} Jan 26 14:29:50 crc kubenswrapper[4922]: I0126 14:29:50.881635 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.040917 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-scripts\") pod \"91754680-73d8-4c72-a7bd-834959e192a1\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.041246 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-config-data\") pod \"91754680-73d8-4c72-a7bd-834959e192a1\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.041323 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91754680-73d8-4c72-a7bd-834959e192a1-etc-machine-id\") pod \"91754680-73d8-4c72-a7bd-834959e192a1\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.041373 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-db-sync-config-data\") pod \"91754680-73d8-4c72-a7bd-834959e192a1\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.041462 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x68hz\" (UniqueName: \"kubernetes.io/projected/91754680-73d8-4c72-a7bd-834959e192a1-kube-api-access-x68hz\") pod \"91754680-73d8-4c72-a7bd-834959e192a1\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.041521 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-combined-ca-bundle\") pod \"91754680-73d8-4c72-a7bd-834959e192a1\" (UID: \"91754680-73d8-4c72-a7bd-834959e192a1\") " Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.042661 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91754680-73d8-4c72-a7bd-834959e192a1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "91754680-73d8-4c72-a7bd-834959e192a1" (UID: "91754680-73d8-4c72-a7bd-834959e192a1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.047028 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-scripts" (OuterVolumeSpecName: "scripts") pod "91754680-73d8-4c72-a7bd-834959e192a1" (UID: "91754680-73d8-4c72-a7bd-834959e192a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.054986 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91754680-73d8-4c72-a7bd-834959e192a1-kube-api-access-x68hz" (OuterVolumeSpecName: "kube-api-access-x68hz") pod "91754680-73d8-4c72-a7bd-834959e192a1" (UID: "91754680-73d8-4c72-a7bd-834959e192a1"). InnerVolumeSpecName "kube-api-access-x68hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.058183 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "91754680-73d8-4c72-a7bd-834959e192a1" (UID: "91754680-73d8-4c72-a7bd-834959e192a1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.086733 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91754680-73d8-4c72-a7bd-834959e192a1" (UID: "91754680-73d8-4c72-a7bd-834959e192a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.127027 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-config-data" (OuterVolumeSpecName: "config-data") pod "91754680-73d8-4c72-a7bd-834959e192a1" (UID: "91754680-73d8-4c72-a7bd-834959e192a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.143915 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.143949 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.143958 4922 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91754680-73d8-4c72-a7bd-834959e192a1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.143967 4922 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.143976 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x68hz\" (UniqueName: \"kubernetes.io/projected/91754680-73d8-4c72-a7bd-834959e192a1-kube-api-access-x68hz\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.143985 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91754680-73d8-4c72-a7bd-834959e192a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.446487 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerStarted","Data":"35369f0eb29926fa78b9e79c13da3ae421e5178af2b296a875ec4efa64f9e6be"} Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.448162 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.449812 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvk6w" event={"ID":"91754680-73d8-4c72-a7bd-834959e192a1","Type":"ContainerDied","Data":"402249eb3c7eb85afb7051d3325eab7b11c1b74773ee1226af39e96a94176e51"} Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.449861 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="402249eb3c7eb85afb7051d3325eab7b11c1b74773ee1226af39e96a94176e51" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.449921 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rvk6w" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.482748 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.382741277 podStartE2EDuration="5.482718304s" podCreationTimestamp="2026-01-26 14:29:46 +0000 UTC" firstStartedPulling="2026-01-26 14:29:47.345700823 +0000 UTC m=+1204.547963595" lastFinishedPulling="2026-01-26 14:29:50.44567781 +0000 UTC m=+1207.647940622" observedRunningTime="2026-01-26 14:29:51.470618083 +0000 UTC m=+1208.672880865" watchObservedRunningTime="2026-01-26 14:29:51.482718304 +0000 UTC m=+1208.684981116" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.840422 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 14:29:51 crc kubenswrapper[4922]: E0126 14:29:51.841127 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91754680-73d8-4c72-a7bd-834959e192a1" containerName="cinder-db-sync" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.841174 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="91754680-73d8-4c72-a7bd-834959e192a1" containerName="cinder-db-sync" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.842639 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="91754680-73d8-4c72-a7bd-834959e192a1" containerName="cinder-db-sync" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.843917 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.846612 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.846708 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.848113 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sqh6v" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.848349 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.869914 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.931347 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74ddf65b4f-8lxm8"] Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.931577 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" podUID="f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" containerName="dnsmasq-dns" containerID="cri-o://b66757c71c460f451103d0a0037a31a4d528be992a78412ab84286759b87f132" gracePeriod=10 Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.940449 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.968979 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f89675f9f-cmxr6"] Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.970561 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.980768 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-896bq\" (UniqueName: \"kubernetes.io/projected/b68caf8c-1863-4437-8ed2-5123d9a14db8-kube-api-access-896bq\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.980913 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-scripts\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.981012 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.981053 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.981236 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b68caf8c-1863-4437-8ed2-5123d9a14db8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.981310 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:51 crc kubenswrapper[4922]: I0126 14:29:51.992051 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f89675f9f-cmxr6"] Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.082031 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.083467 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084499 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084564 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-svc\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084596 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-896bq\" (UniqueName: \"kubernetes.io/projected/b68caf8c-1863-4437-8ed2-5123d9a14db8-kube-api-access-896bq\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084622 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9s2n\" (UniqueName: \"kubernetes.io/projected/22670ba0-ab65-49a7-b3f0-928800e10ca1-kube-api-access-k9s2n\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084643 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-config\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084688 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-scripts\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084729 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084751 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084775 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-nb\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: 
I0126 14:29:52.084792 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-swift-storage-0\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084843 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-sb\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084861 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b68caf8c-1863-4437-8ed2-5123d9a14db8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.084939 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b68caf8c-1863-4437-8ed2-5123d9a14db8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.087752 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.094152 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.095675 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.098966 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.119588 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.125463 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-scripts\") pod \"cinder-scheduler-0\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.125681 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-896bq\" (UniqueName: \"kubernetes.io/projected/b68caf8c-1863-4437-8ed2-5123d9a14db8-kube-api-access-896bq\") pod \"cinder-scheduler-0\" (UID: 
\"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.187498 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.188884 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.188935 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-svc\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.188986 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9s2n\" (UniqueName: \"kubernetes.io/projected/22670ba0-ab65-49a7-b3f0-928800e10ca1-kube-api-access-k9s2n\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189006 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-config\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189034 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51136c50-be90-4461-a3a1-c68bfb6af203-etc-machine-id\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189073 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189100 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51136c50-be90-4461-a3a1-c68bfb6af203-logs\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189135 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hrb5\" (UniqueName: \"kubernetes.io/projected/51136c50-be90-4461-a3a1-c68bfb6af203-kube-api-access-7hrb5\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189151 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data-custom\") 
pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189173 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-nb\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189190 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-swift-storage-0\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189220 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-scripts\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.189268 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-sb\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.190561 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-svc\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.191285 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-config\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.192943 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-nb\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.193166 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-sb\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.194861 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-swift-storage-0\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc 
kubenswrapper[4922]: I0126 14:29:52.209998 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9s2n\" (UniqueName: \"kubernetes.io/projected/22670ba0-ab65-49a7-b3f0-928800e10ca1-kube-api-access-k9s2n\") pod \"dnsmasq-dns-7f89675f9f-cmxr6\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.291487 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.291585 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51136c50-be90-4461-a3a1-c68bfb6af203-etc-machine-id\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.295085 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51136c50-be90-4461-a3a1-c68bfb6af203-etc-machine-id\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.297678 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.297755 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51136c50-be90-4461-a3a1-c68bfb6af203-logs\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.297853 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hrb5\" (UniqueName: \"kubernetes.io/projected/51136c50-be90-4461-a3a1-c68bfb6af203-kube-api-access-7hrb5\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.297875 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data-custom\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.297978 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-scripts\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.298603 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51136c50-be90-4461-a3a1-c68bfb6af203-logs\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc 
kubenswrapper[4922]: I0126 14:29:52.299112 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.299191 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.302427 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-scripts\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.303651 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.322856 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data-custom\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.340298 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hrb5\" (UniqueName: \"kubernetes.io/projected/51136c50-be90-4461-a3a1-c68bfb6af203-kube-api-access-7hrb5\") pod \"cinder-api-0\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.373984 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.490884 4922 generic.go:334] "Generic (PLEG): container finished" podID="281c4d86-0cfa-4637-9106-2099e20add9a" containerID="392dfe958b3ed5aee9c1d4a1b60e37539a9c063ac7641d6f76d3547f769fedc1" exitCode=0 Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.491164 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-44cnx" event={"ID":"281c4d86-0cfa-4637-9106-2099e20add9a","Type":"ContainerDied","Data":"392dfe958b3ed5aee9c1d4a1b60e37539a9c063ac7641d6f76d3547f769fedc1"} Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.510041 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" containerID="b66757c71c460f451103d0a0037a31a4d528be992a78412ab84286759b87f132" exitCode=0 Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.510396 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" event={"ID":"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa","Type":"ContainerDied","Data":"b66757c71c460f451103d0a0037a31a4d528be992a78412ab84286759b87f132"} Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.596650 4922 util.go:48] "No ready sandbox for pod can be found. 
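
Annotation: the two "Generic (PLEG): container finished" entries just above come from the kubelet's Pod Lifecycle Event Generator, which relists container runtime state and turns observed exits into ContainerDied events consumed by the sync loop (kubelet.go:2453). Both exits here are exitCode=0: neutron-db-sync's job container ran to completion, and dnsmasq-dns stopped cleanly inside the 10s grace period it was given at 14:29:51.931577. A sketch of the event shape implied by the log (assumed toy struct, not the kubelet's actual type):

    package main

    import "fmt"

    type plegEvent struct {
        PodUID      string
        ContainerID string
        Type        string // "ContainerStarted" or "ContainerDied"
        ExitCode    int
    }

    func main() {
        ev := plegEvent{
            PodUID:      "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa",
            ContainerID: "b66757c71c460f451103d0a0037a31a4d528be992a78412ab84286759b87f132",
            Type:        "ContainerDied",
            ExitCode:    0, // clean stop within the grace period
        }
        if ev.Type == "ContainerDied" {
            fmt.Printf("SyncLoop (PLEG): event for pod %s: %+v\n", ev.PodUID, ev)
        }
    }
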
Need to start a new one" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.704440 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrkth\" (UniqueName: \"kubernetes.io/projected/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-kube-api-access-mrkth\") pod \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.704561 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-swift-storage-0\") pod \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.704596 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-nb\") pod \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.704638 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-config\") pod \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.704759 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-sb\") pod \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.704798 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-svc\") pod \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\" (UID: \"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa\") " Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.747297 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-kube-api-access-mrkth" (OuterVolumeSpecName: "kube-api-access-mrkth") pod "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" (UID: "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa"). InnerVolumeSpecName "kube-api-access-mrkth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.809341 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrkth\" (UniqueName: \"kubernetes.io/projected/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-kube-api-access-mrkth\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.901208 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" (UID: "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.915466 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.917783 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.938663 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-config" (OuterVolumeSpecName: "config") pod "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" (UID: "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.954457 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" (UID: "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.955047 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" (UID: "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:52 crc kubenswrapper[4922]: I0126 14:29:52.983571 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" (UID: "f4798bf8-f62b-4f88-8a42-4f33ee79eeaa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.019168 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.019198 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.019210 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.019218 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.112525 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.141182 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75bc76c88b-b6znr" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.212992 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f89675f9f-cmxr6"] Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.242615 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b778d8db6-87lbq"] Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.242855 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api-log" containerID="cri-o://1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f" gracePeriod=30 Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.242963 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api" containerID="cri-o://cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9" gracePeriod=30 Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.253276 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": EOF" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.253329 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": EOF" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.253276 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": EOF" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 
14:29:53.253653 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": EOF" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.269690 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.529820 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b68caf8c-1863-4437-8ed2-5123d9a14db8","Type":"ContainerStarted","Data":"3c9a4091b0a911171b52cef765688fcd91f4d0218953001771b7d6b1e73f4c2c"} Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.536264 4922 generic.go:334] "Generic (PLEG): container finished" podID="cd13bb86-5407-4a3b-b563-469791214577" containerID="1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f" exitCode=143 Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.536341 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b778d8db6-87lbq" event={"ID":"cd13bb86-5407-4a3b-b563-469791214577","Type":"ContainerDied","Data":"1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f"} Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.541081 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" event={"ID":"f4798bf8-f62b-4f88-8a42-4f33ee79eeaa","Type":"ContainerDied","Data":"2a030e9015fc6d4fe69a23fa0a1f2db1457e860ec8d03c5f39d809c26c032ef0"} Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.541138 4922 scope.go:117] "RemoveContainer" containerID="b66757c71c460f451103d0a0037a31a4d528be992a78412ab84286759b87f132" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.541336 4922 util.go:48] "No ready sandbox for pod can be found. 
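
Annotation: barbican-api-log exited with code 143 above, which decodes as 128 + 15, i.e. terminated by SIGTERM: the expected outcome of the "Killing container with a grace period" (gracePeriod=30) entries at 14:29:53.242. The simultaneous Readiness/Liveness failures against http://10.217.0.176:9311/healthcheck ("EOF") are likewise just the server closing connections while it shuts down, not a new fault. The decoding:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Common shell/runtime convention: signal deaths map to 128 + signal number.
        fmt.Println(128 + int(syscall.SIGTERM)) // SIGTERM is 15, so this prints 143
    }
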
Need to start a new one" pod="openstack/dnsmasq-dns-74ddf65b4f-8lxm8" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.556856 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.556896 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.557529 4922 scope.go:117] "RemoveContainer" containerID="2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855" Jan 26 14:29:53 crc kubenswrapper[4922]: E0126 14:29:53.557739 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(1678095e-0a1d-4199-90c6-ea3afc879e0b)\"" pod="openstack/watcher-decision-engine-0" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.570490 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74ddf65b4f-8lxm8"] Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.581923 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74ddf65b4f-8lxm8"] Jan 26 14:29:53 crc kubenswrapper[4922]: I0126 14:29:53.653949 4922 scope.go:117] "RemoveContainer" containerID="0a82e0b3537a1b01ba3d756d702288dba9ecae822fbb9b79e29b21531c9d62de" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.167204 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.427145 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-44cnx" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.560515 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdqx9\" (UniqueName: \"kubernetes.io/projected/281c4d86-0cfa-4637-9106-2099e20add9a-kube-api-access-kdqx9\") pod \"281c4d86-0cfa-4637-9106-2099e20add9a\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.560734 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-combined-ca-bundle\") pod \"281c4d86-0cfa-4637-9106-2099e20add9a\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.560778 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-config\") pod \"281c4d86-0cfa-4637-9106-2099e20add9a\" (UID: \"281c4d86-0cfa-4637-9106-2099e20add9a\") " Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.570238 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281c4d86-0cfa-4637-9106-2099e20add9a-kube-api-access-kdqx9" (OuterVolumeSpecName: "kube-api-access-kdqx9") pod "281c4d86-0cfa-4637-9106-2099e20add9a" (UID: "281c4d86-0cfa-4637-9106-2099e20add9a"). InnerVolumeSpecName "kube-api-access-kdqx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.612185 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"51136c50-be90-4461-a3a1-c68bfb6af203","Type":"ContainerStarted","Data":"1e88bc838a9deff38104b8c1ab3b5985f381d22b7922834216df75445529692b"} Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.640322 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-44cnx" event={"ID":"281c4d86-0cfa-4637-9106-2099e20add9a","Type":"ContainerDied","Data":"f24353b9ab22cba890a4dfa5dbf770f418d3c94bd461190048dd7bffa1c65c2d"} Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.640358 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f24353b9ab22cba890a4dfa5dbf770f418d3c94bd461190048dd7bffa1c65c2d" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.640415 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-44cnx" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.662281 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-config" (OuterVolumeSpecName: "config") pod "281c4d86-0cfa-4637-9106-2099e20add9a" (UID: "281c4d86-0cfa-4637-9106-2099e20add9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.662721 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" event={"ID":"22670ba0-ab65-49a7-b3f0-928800e10ca1","Type":"ContainerStarted","Data":"8c905e08976212887673b48a794f42006554ec5f5cbf19b61909246794941ad6"} Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.668208 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "281c4d86-0cfa-4637-9106-2099e20add9a" (UID: "281c4d86-0cfa-4637-9106-2099e20add9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.673297 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.673328 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdqx9\" (UniqueName: \"kubernetes.io/projected/281c4d86-0cfa-4637-9106-2099e20add9a-kube-api-access-kdqx9\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.673339 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281c4d86-0cfa-4637-9106-2099e20add9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.815198 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f89675f9f-cmxr6"] Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.850858 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbb4847c5-48r4w"] Jan 26 14:29:54 crc kubenswrapper[4922]: E0126 14:29:54.851340 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281c4d86-0cfa-4637-9106-2099e20add9a" containerName="neutron-db-sync" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.851354 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="281c4d86-0cfa-4637-9106-2099e20add9a" containerName="neutron-db-sync" Jan 26 14:29:54 crc kubenswrapper[4922]: E0126 14:29:54.851381 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" containerName="init" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.851388 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" containerName="init" Jan 26 14:29:54 crc kubenswrapper[4922]: E0126 14:29:54.851397 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" containerName="dnsmasq-dns" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.851403 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" containerName="dnsmasq-dns" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.851593 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" containerName="dnsmasq-dns" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.851607 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="281c4d86-0cfa-4637-9106-2099e20add9a" containerName="neutron-db-sync" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.852721 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.864130 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbb4847c5-48r4w"] Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.918137 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d5bfcf8c6-kc4k2"] Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.919701 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.923304 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.943389 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d5bfcf8c6-kc4k2"] Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.983036 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-httpd-config\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.983180 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-ovndb-tls-certs\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.983209 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.983285 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-combined-ca-bundle\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.983302 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.983343 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-config\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.983362 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.983436 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxwxl\" (UniqueName: \"kubernetes.io/projected/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-kube-api-access-lxwxl\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: 
\"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.984195 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-kube-api-access-566nw\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.984270 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-config\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:54 crc kubenswrapper[4922]: I0126 14:29:54.984298 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-svc\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086279 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-ovndb-tls-certs\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086335 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086386 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-combined-ca-bundle\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086401 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086425 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-config\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086444 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " 
pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086494 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxwxl\" (UniqueName: \"kubernetes.io/projected/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-kube-api-access-lxwxl\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086517 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-kube-api-access-566nw\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086554 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-config\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086573 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-svc\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.086601 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-httpd-config\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.087826 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.087830 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.087843 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-config\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.088835 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.088993 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-svc\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.090389 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.107959 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-config\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.108598 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-httpd-config\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.108934 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-ovndb-tls-certs\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.114094 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxwxl\" (UniqueName: \"kubernetes.io/projected/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-kube-api-access-lxwxl\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.116160 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4798bf8-f62b-4f88-8a42-4f33ee79eeaa" path="/var/lib/kubelet/pods/f4798bf8-f62b-4f88-8a42-4f33ee79eeaa/volumes" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.117526 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-kube-api-access-566nw\") pod \"dnsmasq-dns-5fbb4847c5-48r4w\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.119637 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-combined-ca-bundle\") pod \"neutron-d5bfcf8c6-kc4k2\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.199492 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.269544 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.730138 4922 generic.go:334] "Generic (PLEG): container finished" podID="22670ba0-ab65-49a7-b3f0-928800e10ca1" containerID="dc8c8f524c309be6f485c7a7e642260365e46ffdc175eb5bb3391940bcc3af61" exitCode=0 Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.730303 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" event={"ID":"22670ba0-ab65-49a7-b3f0-928800e10ca1","Type":"ContainerDied","Data":"dc8c8f524c309be6f485c7a7e642260365e46ffdc175eb5bb3391940bcc3af61"} Jan 26 14:29:55 crc kubenswrapper[4922]: I0126 14:29:55.903655 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbb4847c5-48r4w"] Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.340723 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.417417 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-swift-storage-0\") pod \"22670ba0-ab65-49a7-b3f0-928800e10ca1\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.417487 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-sb\") pod \"22670ba0-ab65-49a7-b3f0-928800e10ca1\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.417524 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-config\") pod \"22670ba0-ab65-49a7-b3f0-928800e10ca1\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.417544 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-svc\") pod \"22670ba0-ab65-49a7-b3f0-928800e10ca1\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.417609 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-nb\") pod \"22670ba0-ab65-49a7-b3f0-928800e10ca1\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.417732 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9s2n\" (UniqueName: \"kubernetes.io/projected/22670ba0-ab65-49a7-b3f0-928800e10ca1-kube-api-access-k9s2n\") pod \"22670ba0-ab65-49a7-b3f0-928800e10ca1\" (UID: \"22670ba0-ab65-49a7-b3f0-928800e10ca1\") " Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.438224 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22670ba0-ab65-49a7-b3f0-928800e10ca1-kube-api-access-k9s2n" (OuterVolumeSpecName: "kube-api-access-k9s2n") pod "22670ba0-ab65-49a7-b3f0-928800e10ca1" (UID: "22670ba0-ab65-49a7-b3f0-928800e10ca1"). InnerVolumeSpecName "kube-api-access-k9s2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.445435 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22670ba0-ab65-49a7-b3f0-928800e10ca1" (UID: "22670ba0-ab65-49a7-b3f0-928800e10ca1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.453553 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22670ba0-ab65-49a7-b3f0-928800e10ca1" (UID: "22670ba0-ab65-49a7-b3f0-928800e10ca1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.455675 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-config" (OuterVolumeSpecName: "config") pod "22670ba0-ab65-49a7-b3f0-928800e10ca1" (UID: "22670ba0-ab65-49a7-b3f0-928800e10ca1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.470927 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "22670ba0-ab65-49a7-b3f0-928800e10ca1" (UID: "22670ba0-ab65-49a7-b3f0-928800e10ca1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.476705 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22670ba0-ab65-49a7-b3f0-928800e10ca1" (UID: "22670ba0-ab65-49a7-b3f0-928800e10ca1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.519663 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.519694 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.519704 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.519712 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.519720 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22670ba0-ab65-49a7-b3f0-928800e10ca1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.519730 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9s2n\" (UniqueName: \"kubernetes.io/projected/22670ba0-ab65-49a7-b3f0-928800e10ca1-kube-api-access-k9s2n\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.672453 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b4f749b44-2qdw7" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.680543 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d5bfcf8c6-kc4k2"] Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.788333 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b68caf8c-1863-4437-8ed2-5123d9a14db8","Type":"ContainerStarted","Data":"57b5d0028587b97dcdb8dac99dfbc147ac07958eac9dee99787e3c843fec14b1"} Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.806021 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" event={"ID":"22670ba0-ab65-49a7-b3f0-928800e10ca1","Type":"ContainerDied","Data":"8c905e08976212887673b48a794f42006554ec5f5cbf19b61909246794941ad6"} Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.807126 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f89675f9f-cmxr6" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.807399 4922 scope.go:117] "RemoveContainer" containerID="dc8c8f524c309be6f485c7a7e642260365e46ffdc175eb5bb3391940bcc3af61" Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.825514 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5bfcf8c6-kc4k2" event={"ID":"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5","Type":"ContainerStarted","Data":"4365bf2aa6fa42306643ab8bb8ccb29f00bcc611b5756fac424abb647cb17548"} Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.873299 4922 generic.go:334] "Generic (PLEG): container finished" podID="36b36d7c-c277-4c60-9408-3a2bf41cfc7d" containerID="b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb" exitCode=0 Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.873386 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" event={"ID":"36b36d7c-c277-4c60-9408-3a2bf41cfc7d","Type":"ContainerDied","Data":"b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb"} Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.873412 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" event={"ID":"36b36d7c-c277-4c60-9408-3a2bf41cfc7d","Type":"ContainerStarted","Data":"c5b0c5e6ad76192313cbe7098200c43f14f21446fd6a3d755ce06f3bcaca138d"} Jan 26 14:29:56 crc kubenswrapper[4922]: I0126 14:29:56.897523 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"51136c50-be90-4461-a3a1-c68bfb6af203","Type":"ContainerStarted","Data":"3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea"} Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.118935 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f89675f9f-cmxr6"] Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.141955 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f89675f9f-cmxr6"] Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.907528 4922 generic.go:334] "Generic (PLEG): container finished" podID="99c8b640-ac97-4a3e-8e4c-1781bd756396" containerID="a04ecedb0937b481c79df65e9f38d469fe0b6a913ee51d8b391e1e0fd2332851" exitCode=0 Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.907812 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lt6mt" event={"ID":"99c8b640-ac97-4a3e-8e4c-1781bd756396","Type":"ContainerDied","Data":"a04ecedb0937b481c79df65e9f38d469fe0b6a913ee51d8b391e1e0fd2332851"} Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.910525 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5bfcf8c6-kc4k2" event={"ID":"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5","Type":"ContainerStarted","Data":"b07e43790eb63088b25b9cde071bc5ee5d645be310612a221d6a2c645956adb4"} Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.910559 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5bfcf8c6-kc4k2" event={"ID":"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5","Type":"ContainerStarted","Data":"4a44237cfd0d60ec2f3e1bea49a3103357095b7c8190adec69560d7f5da7ad22"} Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.911429 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.913315 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" event={"ID":"36b36d7c-c277-4c60-9408-3a2bf41cfc7d","Type":"ContainerStarted","Data":"f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167"} Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.913753 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.915425 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"51136c50-be90-4461-a3a1-c68bfb6af203","Type":"ContainerStarted","Data":"f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d"} Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.915528 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="51136c50-be90-4461-a3a1-c68bfb6af203" containerName="cinder-api-log" containerID="cri-o://3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea" gracePeriod=30 Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.915720 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.915754 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="51136c50-be90-4461-a3a1-c68bfb6af203" containerName="cinder-api" containerID="cri-o://f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d" gracePeriod=30 Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.917562 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b68caf8c-1863-4437-8ed2-5123d9a14db8","Type":"ContainerStarted","Data":"14adc38e5ac5b91e7a65cffe8cb2083a5e7c101a67c3e452eca95cd5bf88d8aa"} Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.947955 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.947937618 podStartE2EDuration="5.947937618s" podCreationTimestamp="2026-01-26 14:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:57.936521487 +0000 UTC m=+1215.138784259" watchObservedRunningTime="2026-01-26 14:29:57.947937618 +0000 UTC m=+1215.150200390" Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.960527 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" podStartSLOduration=3.960505732 podStartE2EDuration="3.960505732s" podCreationTimestamp="2026-01-26 14:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:57.958378982 +0000 UTC m=+1215.160641754" watchObservedRunningTime="2026-01-26 14:29:57.960505732 +0000 UTC m=+1215.162768504" Jan 26 14:29:57 crc kubenswrapper[4922]: I0126 14:29:57.980923 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.502553234 podStartE2EDuration="6.980906886s" podCreationTimestamp="2026-01-26 14:29:51 +0000 UTC" firstStartedPulling="2026-01-26 14:29:52.942187943 +0000 UTC m=+1210.144450715" lastFinishedPulling="2026-01-26 14:29:54.420541595 +0000 UTC m=+1211.622804367" observedRunningTime="2026-01-26 14:29:57.978501289 +0000 UTC m=+1215.180764061" watchObservedRunningTime="2026-01-26 14:29:57.980906886 +0000 UTC 
Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.141444 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": read tcp 10.217.0.2:59210->10.217.0.176:9311: read: connection reset by peer"
Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.141728 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": read tcp 10.217.0.2:59200->10.217.0.176:9311: read: connection reset by peer"
Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.141760 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": dial tcp 10.217.0.176:9311: connect: connection refused"
Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.143546 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b778d8db6-87lbq" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": dial tcp 10.217.0.176:9311: connect: connection refused"
Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.739292 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.744295 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b778d8db6-87lbq"
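[Editor's note] The prober.go:107 failures above show both transport-level failure modes of an HTTP readiness check during a restart: "connection reset by peer" (the listener dying mid-request) and "connection refused" (nothing listening yet). A minimal readiness-style probe in the same spirit; this is illustrative, not the kubelet prober, and the URL and timeout are assumptions:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHTTP mimics an HTTP readiness check: GET the endpoint and treat
// any transport error or non-2xx/3xx status as probeResult="failure".
func probeHTTP(url string) (ok bool, output string) {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// Transport errors surface exactly like the outputs above, e.g.
		// "dial tcp 10.217.0.176:9311: connect: connection refused".
		return false, err.Error()
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
	return resp.StatusCode >= 200 && resp.StatusCode < 400, resp.Status
}

func main() {
	ok, out := probeHTTP("http://10.217.0.176:9311/healthcheck")
	result := "failure"
	if ok {
		result = "success"
	}
	fmt.Printf("probeResult=%s output=%q\n", result, out)
}
```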
Need to start a new one" pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.762543 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d5bfcf8c6-kc4k2" podStartSLOduration=4.762526528 podStartE2EDuration="4.762526528s" podCreationTimestamp="2026-01-26 14:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:29:58.001568739 +0000 UTC m=+1215.203831511" watchObservedRunningTime="2026-01-26 14:29:58.762526528 +0000 UTC m=+1215.964789300" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.875841 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-combined-ca-bundle\") pod \"cd13bb86-5407-4a3b-b563-469791214577\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.875916 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data-custom\") pod \"51136c50-be90-4461-a3a1-c68bfb6af203\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.875951 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data\") pod \"51136c50-be90-4461-a3a1-c68bfb6af203\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876025 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvf6g\" (UniqueName: \"kubernetes.io/projected/cd13bb86-5407-4a3b-b563-469791214577-kube-api-access-lvf6g\") pod \"cd13bb86-5407-4a3b-b563-469791214577\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876055 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd13bb86-5407-4a3b-b563-469791214577-logs\") pod \"cd13bb86-5407-4a3b-b563-469791214577\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876101 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data\") pod \"cd13bb86-5407-4a3b-b563-469791214577\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876180 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-scripts\") pod \"51136c50-be90-4461-a3a1-c68bfb6af203\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876216 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-combined-ca-bundle\") pod \"51136c50-be90-4461-a3a1-c68bfb6af203\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876266 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51136c50-be90-4461-a3a1-c68bfb6af203-etc-machine-id\") pod \"51136c50-be90-4461-a3a1-c68bfb6af203\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876361 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51136c50-be90-4461-a3a1-c68bfb6af203-logs\") pod \"51136c50-be90-4461-a3a1-c68bfb6af203\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876403 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data-custom\") pod \"cd13bb86-5407-4a3b-b563-469791214577\" (UID: \"cd13bb86-5407-4a3b-b563-469791214577\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876451 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hrb5\" (UniqueName: \"kubernetes.io/projected/51136c50-be90-4461-a3a1-c68bfb6af203-kube-api-access-7hrb5\") pod \"51136c50-be90-4461-a3a1-c68bfb6af203\" (UID: \"51136c50-be90-4461-a3a1-c68bfb6af203\") " Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876531 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51136c50-be90-4461-a3a1-c68bfb6af203-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "51136c50-be90-4461-a3a1-c68bfb6af203" (UID: "51136c50-be90-4461-a3a1-c68bfb6af203"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876822 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51136c50-be90-4461-a3a1-c68bfb6af203-logs" (OuterVolumeSpecName: "logs") pod "51136c50-be90-4461-a3a1-c68bfb6af203" (UID: "51136c50-be90-4461-a3a1-c68bfb6af203"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.876869 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd13bb86-5407-4a3b-b563-469791214577-logs" (OuterVolumeSpecName: "logs") pod "cd13bb86-5407-4a3b-b563-469791214577" (UID: "cd13bb86-5407-4a3b-b563-469791214577"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.885881 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd13bb86-5407-4a3b-b563-469791214577" (UID: "cd13bb86-5407-4a3b-b563-469791214577"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.885986 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "51136c50-be90-4461-a3a1-c68bfb6af203" (UID: "51136c50-be90-4461-a3a1-c68bfb6af203"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.886246 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd13bb86-5407-4a3b-b563-469791214577-kube-api-access-lvf6g" (OuterVolumeSpecName: "kube-api-access-lvf6g") pod "cd13bb86-5407-4a3b-b563-469791214577" (UID: "cd13bb86-5407-4a3b-b563-469791214577"). InnerVolumeSpecName "kube-api-access-lvf6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.888247 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-scripts" (OuterVolumeSpecName: "scripts") pod "51136c50-be90-4461-a3a1-c68bfb6af203" (UID: "51136c50-be90-4461-a3a1-c68bfb6af203"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.888302 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51136c50-be90-4461-a3a1-c68bfb6af203-kube-api-access-7hrb5" (OuterVolumeSpecName: "kube-api-access-7hrb5") pod "51136c50-be90-4461-a3a1-c68bfb6af203" (UID: "51136c50-be90-4461-a3a1-c68bfb6af203"). InnerVolumeSpecName "kube-api-access-7hrb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.927386 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51136c50-be90-4461-a3a1-c68bfb6af203" (UID: "51136c50-be90-4461-a3a1-c68bfb6af203"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.953152 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd13bb86-5407-4a3b-b563-469791214577" (UID: "cd13bb86-5407-4a3b-b563-469791214577"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.953206 4922 generic.go:334] "Generic (PLEG): container finished" podID="cd13bb86-5407-4a3b-b563-469791214577" containerID="cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9" exitCode=0 Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.953281 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b778d8db6-87lbq" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.953328 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b778d8db6-87lbq" event={"ID":"cd13bb86-5407-4a3b-b563-469791214577","Type":"ContainerDied","Data":"cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9"} Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.953385 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b778d8db6-87lbq" event={"ID":"cd13bb86-5407-4a3b-b563-469791214577","Type":"ContainerDied","Data":"3db85e46a1cb5f03146715613ce0a5bcaeb213711ea2857d779e9e89608fddd9"} Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.953405 4922 scope.go:117] "RemoveContainer" containerID="cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955216 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75cdcc7857-fs8tr"] Jan 26 14:29:58 crc kubenswrapper[4922]: E0126 14:29:58.955599 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22670ba0-ab65-49a7-b3f0-928800e10ca1" containerName="init" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955611 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="22670ba0-ab65-49a7-b3f0-928800e10ca1" containerName="init" Jan 26 14:29:58 crc kubenswrapper[4922]: E0126 14:29:58.955625 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955634 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api" Jan 26 14:29:58 crc kubenswrapper[4922]: E0126 14:29:58.955646 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51136c50-be90-4461-a3a1-c68bfb6af203" containerName="cinder-api" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955652 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="51136c50-be90-4461-a3a1-c68bfb6af203" containerName="cinder-api" Jan 26 14:29:58 crc kubenswrapper[4922]: E0126 14:29:58.955665 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51136c50-be90-4461-a3a1-c68bfb6af203" containerName="cinder-api-log" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955671 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="51136c50-be90-4461-a3a1-c68bfb6af203" containerName="cinder-api-log" Jan 26 14:29:58 crc kubenswrapper[4922]: E0126 14:29:58.955687 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api-log" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955693 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api-log" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955868 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955877 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="51136c50-be90-4461-a3a1-c68bfb6af203" containerName="cinder-api-log" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955896 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="22670ba0-ab65-49a7-b3f0-928800e10ca1" 
containerName="init" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.955910 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="51136c50-be90-4461-a3a1-c68bfb6af203" containerName="cinder-api" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.956159 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd13bb86-5407-4a3b-b563-469791214577" containerName="barbican-api-log" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.957237 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.960399 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.960587 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.971688 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75cdcc7857-fs8tr"] Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981594 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981640 4922 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981654 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvf6g\" (UniqueName: \"kubernetes.io/projected/cd13bb86-5407-4a3b-b563-469791214577-kube-api-access-lvf6g\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981668 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd13bb86-5407-4a3b-b563-469791214577-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981679 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981688 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981700 4922 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51136c50-be90-4461-a3a1-c68bfb6af203-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981711 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51136c50-be90-4461-a3a1-c68bfb6af203-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981720 4922 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.981730 
4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hrb5\" (UniqueName: \"kubernetes.io/projected/51136c50-be90-4461-a3a1-c68bfb6af203-kube-api-access-7hrb5\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.985469 4922 generic.go:334] "Generic (PLEG): container finished" podID="51136c50-be90-4461-a3a1-c68bfb6af203" containerID="f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d" exitCode=0 Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.985504 4922 generic.go:334] "Generic (PLEG): container finished" podID="51136c50-be90-4461-a3a1-c68bfb6af203" containerID="3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea" exitCode=143 Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.985877 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.986321 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"51136c50-be90-4461-a3a1-c68bfb6af203","Type":"ContainerDied","Data":"f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d"} Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.986367 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"51136c50-be90-4461-a3a1-c68bfb6af203","Type":"ContainerDied","Data":"3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea"} Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.986380 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"51136c50-be90-4461-a3a1-c68bfb6af203","Type":"ContainerDied","Data":"1e88bc838a9deff38104b8c1ab3b5985f381d22b7922834216df75445529692b"} Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.992863 4922 scope.go:117] "RemoveContainer" containerID="1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f" Jan 26 14:29:58 crc kubenswrapper[4922]: I0126 14:29:58.998271 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data" (OuterVolumeSpecName: "config-data") pod "cd13bb86-5407-4a3b-b563-469791214577" (UID: "cd13bb86-5407-4a3b-b563-469791214577"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.022353 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data" (OuterVolumeSpecName: "config-data") pod "51136c50-be90-4461-a3a1-c68bfb6af203" (UID: "51136c50-be90-4461-a3a1-c68bfb6af203"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.024862 4922 scope.go:117] "RemoveContainer" containerID="cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9" Jan 26 14:29:59 crc kubenswrapper[4922]: E0126 14:29:59.025308 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9\": container with ID starting with cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9 not found: ID does not exist" containerID="cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.025341 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9"} err="failed to get container status \"cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9\": rpc error: code = NotFound desc = could not find container \"cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9\": container with ID starting with cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9 not found: ID does not exist" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.025359 4922 scope.go:117] "RemoveContainer" containerID="1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f" Jan 26 14:29:59 crc kubenswrapper[4922]: E0126 14:29:59.032529 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f\": container with ID starting with 1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f not found: ID does not exist" containerID="1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.032566 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f"} err="failed to get container status \"1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f\": rpc error: code = NotFound desc = could not find container \"1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f\": container with ID starting with 1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f not found: ID does not exist" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.032619 4922 scope.go:117] "RemoveContainer" containerID="f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.066303 4922 scope.go:117] "RemoveContainer" containerID="3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.083676 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-httpd-config\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.083745 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-internal-tls-certs\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.083820 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-config\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.083869 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-combined-ca-bundle\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.083896 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5569b\" (UniqueName: \"kubernetes.io/projected/f79e2698-4080-4a22-8110-89e8c7217018-kube-api-access-5569b\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.083935 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-ovndb-tls-certs\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.083987 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-public-tls-certs\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.084164 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51136c50-be90-4461-a3a1-c68bfb6af203-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.084209 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd13bb86-5407-4a3b-b563-469791214577-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.100256 4922 scope.go:117] "RemoveContainer" containerID="f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d" Jan 26 14:29:59 crc kubenswrapper[4922]: E0126 14:29:59.100759 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d\": container with ID starting with f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d not found: ID does not exist" containerID="f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.100810 4922 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d"} err="failed to get container status \"f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d\": rpc error: code = NotFound desc = could not find container \"f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d\": container with ID starting with f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d not found: ID does not exist" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.100835 4922 scope.go:117] "RemoveContainer" containerID="3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea" Jan 26 14:29:59 crc kubenswrapper[4922]: E0126 14:29:59.101166 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea\": container with ID starting with 3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea not found: ID does not exist" containerID="3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.101214 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea"} err="failed to get container status \"3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea\": rpc error: code = NotFound desc = could not find container \"3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea\": container with ID starting with 3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea not found: ID does not exist" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.101242 4922 scope.go:117] "RemoveContainer" containerID="f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.101514 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d"} err="failed to get container status \"f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d\": rpc error: code = NotFound desc = could not find container \"f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d\": container with ID starting with f307f3a85b8c6e4baf6526520fe6d66b7b9dcfd34942fe57dc9ed23b4cb3176d not found: ID does not exist" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.101531 4922 scope.go:117] "RemoveContainer" containerID="3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.103149 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea"} err="failed to get container status \"3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea\": rpc error: code = NotFound desc = could not find container \"3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea\": container with ID starting with 3768c3c30ca5a31a94549f78df77d43a66eddd946693febb439ffd1569022bea not found: ID does not exist" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.156821 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22670ba0-ab65-49a7-b3f0-928800e10ca1" path="/var/lib/kubelet/pods/22670ba0-ab65-49a7-b3f0-928800e10ca1/volumes" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 
14:29:59.186790 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-public-tls-certs\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.186947 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-httpd-config\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.187008 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-internal-tls-certs\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.187070 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-config\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.187133 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-combined-ca-bundle\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.187151 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5569b\" (UniqueName: \"kubernetes.io/projected/f79e2698-4080-4a22-8110-89e8c7217018-kube-api-access-5569b\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.187175 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-ovndb-tls-certs\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.193173 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-httpd-config\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.193610 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-internal-tls-certs\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.196351 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-public-tls-certs\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.197089 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-ovndb-tls-certs\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.197708 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-combined-ca-bundle\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.214678 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f79e2698-4080-4a22-8110-89e8c7217018-config\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.226627 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5569b\" (UniqueName: \"kubernetes.io/projected/f79e2698-4080-4a22-8110-89e8c7217018-kube-api-access-5569b\") pod \"neutron-75cdcc7857-fs8tr\" (UID: \"f79e2698-4080-4a22-8110-89e8c7217018\") " pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.282081 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b778d8db6-87lbq"] Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.292940 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b778d8db6-87lbq"] Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.305038 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.319252 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.331679 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.338079 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.340335 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.347403 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.347749 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.347913 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.380883 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.453569 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lt6mt" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.492525 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzkl7\" (UniqueName: \"kubernetes.io/projected/2408f586-2d21-49ee-a728-08b3190483b8-kube-api-access-wzkl7\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.492585 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-config-data-custom\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.492606 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.492638 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-config-data\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.493365 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.493447 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2408f586-2d21-49ee-a728-08b3190483b8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.493489 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 
14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.493550 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2408f586-2d21-49ee-a728-08b3190483b8-logs\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.493636 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-scripts\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.594711 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64mnb\" (UniqueName: \"kubernetes.io/projected/99c8b640-ac97-4a3e-8e4c-1781bd756396-kube-api-access-64mnb\") pod \"99c8b640-ac97-4a3e-8e4c-1781bd756396\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.594781 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-config-data\") pod \"99c8b640-ac97-4a3e-8e4c-1781bd756396\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.594903 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-combined-ca-bundle\") pod \"99c8b640-ac97-4a3e-8e4c-1781bd756396\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.594952 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-db-sync-config-data\") pod \"99c8b640-ac97-4a3e-8e4c-1781bd756396\" (UID: \"99c8b640-ac97-4a3e-8e4c-1781bd756396\") " Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595291 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2408f586-2d21-49ee-a728-08b3190483b8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595326 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595356 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2408f586-2d21-49ee-a728-08b3190483b8-logs\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595396 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-scripts\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 
crc kubenswrapper[4922]: I0126 14:29:59.595435 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2408f586-2d21-49ee-a728-08b3190483b8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595449 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzkl7\" (UniqueName: \"kubernetes.io/projected/2408f586-2d21-49ee-a728-08b3190483b8-kube-api-access-wzkl7\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595599 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-config-data-custom\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595621 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595653 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-config-data\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.595921 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2408f586-2d21-49ee-a728-08b3190483b8-logs\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.599840 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c8b640-ac97-4a3e-8e4c-1781bd756396-kube-api-access-64mnb" (OuterVolumeSpecName: "kube-api-access-64mnb") pod "99c8b640-ac97-4a3e-8e4c-1781bd756396" (UID: "99c8b640-ac97-4a3e-8e4c-1781bd756396"). InnerVolumeSpecName "kube-api-access-64mnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.601524 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-scripts\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.614875 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "99c8b640-ac97-4a3e-8e4c-1781bd756396" (UID: "99c8b640-ac97-4a3e-8e4c-1781bd756396"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.615331 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.625628 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.625966 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.626337 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-config-data\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.626423 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2408f586-2d21-49ee-a728-08b3190483b8-config-data-custom\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.626537 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzkl7\" (UniqueName: \"kubernetes.io/projected/2408f586-2d21-49ee-a728-08b3190483b8-kube-api-access-wzkl7\") pod \"cinder-api-0\" (UID: \"2408f586-2d21-49ee-a728-08b3190483b8\") " pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.655878 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99c8b640-ac97-4a3e-8e4c-1781bd756396" (UID: "99c8b640-ac97-4a3e-8e4c-1781bd756396"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.669950 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.690754 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-config-data" (OuterVolumeSpecName: "config-data") pod "99c8b640-ac97-4a3e-8e4c-1781bd756396" (UID: "99c8b640-ac97-4a3e-8e4c-1781bd756396"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.701083 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64mnb\" (UniqueName: \"kubernetes.io/projected/99c8b640-ac97-4a3e-8e4c-1781bd756396-kube-api-access-64mnb\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.701179 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.701189 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.701197 4922 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c8b640-ac97-4a3e-8e4c-1781bd756396-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:29:59 crc kubenswrapper[4922]: I0126 14:29:59.898711 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75cdcc7857-fs8tr"] Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.022240 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lt6mt" event={"ID":"99c8b640-ac97-4a3e-8e4c-1781bd756396","Type":"ContainerDied","Data":"5602197bdf278ba79ffee25c5deb128f8e661811a33c76ff52a6d28649e97fc7"} Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.022490 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5602197bdf278ba79ffee25c5deb128f8e661811a33c76ff52a6d28649e97fc7" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.022315 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-lt6mt" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.025417 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75cdcc7857-fs8tr" event={"ID":"f79e2698-4080-4a22-8110-89e8c7217018","Type":"ContainerStarted","Data":"66f934a4402e4be9a5711f019af302f172c1637e8c701955a8613f03163c02ce"} Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.128894 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.163834 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t"] Jan 26 14:30:00 crc kubenswrapper[4922]: E0126 14:30:00.164248 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c8b640-ac97-4a3e-8e4c-1781bd756396" containerName="glance-db-sync" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.164263 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c8b640-ac97-4a3e-8e4c-1781bd756396" containerName="glance-db-sync" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.164460 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c8b640-ac97-4a3e-8e4c-1781bd756396" containerName="glance-db-sync" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.165074 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.168368 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.169517 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.172762 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t"] Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.328417 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-config-volume\") pod \"collect-profiles-29490630-n988t\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.328463 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tltt\" (UniqueName: \"kubernetes.io/projected/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-kube-api-access-7tltt\") pod \"collect-profiles-29490630-n988t\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.328537 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-secret-volume\") pod \"collect-profiles-29490630-n988t\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.430257 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-secret-volume\") pod \"collect-profiles-29490630-n988t\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.430386 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-config-volume\") pod \"collect-profiles-29490630-n988t\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.430410 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tltt\" (UniqueName: \"kubernetes.io/projected/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-kube-api-access-7tltt\") pod \"collect-profiles-29490630-n988t\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.431930 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-config-volume\") pod \"collect-profiles-29490630-n988t\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.435759 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-secret-volume\") pod \"collect-profiles-29490630-n988t\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.442397 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbb4847c5-48r4w"] Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.451436 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tltt\" (UniqueName: \"kubernetes.io/projected/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-kube-api-access-7tltt\") pod \"collect-profiles-29490630-n988t\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.491547 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.492050 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7759df7475-j7d9x"] Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.493532 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.514081 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7759df7475-j7d9x"] Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.638042 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-sb\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.638410 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsl7m\" (UniqueName: \"kubernetes.io/projected/65b63bfa-549a-4eb1-977a-90b1e119bd9e-kube-api-access-wsl7m\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.638443 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-svc\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.638472 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-config\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.638488 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-nb\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.638530 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-swift-storage-0\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.747252 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsl7m\" (UniqueName: \"kubernetes.io/projected/65b63bfa-549a-4eb1-977a-90b1e119bd9e-kube-api-access-wsl7m\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.747357 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-svc\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.747422 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-config\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.747457 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-nb\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.747547 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-swift-storage-0\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.747690 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-sb\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.748967 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-sb\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.750067 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-nb\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.750171 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-svc\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.751837 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-swift-storage-0\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.762393 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-config\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.772873 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsl7m\" (UniqueName: 
\"kubernetes.io/projected/65b63bfa-549a-4eb1-977a-90b1e119bd9e-kube-api-access-wsl7m\") pod \"dnsmasq-dns-7759df7475-j7d9x\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:00 crc kubenswrapper[4922]: I0126 14:30:00.846317 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.088888 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2408f586-2d21-49ee-a728-08b3190483b8","Type":"ContainerStarted","Data":"f49a90f63aab3cfc38065d2c871bd4ffcc64e8c683530807ad0674f791eaedd7"} Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.092757 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" podUID="36b36d7c-c277-4c60-9408-3a2bf41cfc7d" containerName="dnsmasq-dns" containerID="cri-o://f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167" gracePeriod=10 Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.116607 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51136c50-be90-4461-a3a1-c68bfb6af203" path="/var/lib/kubelet/pods/51136c50-be90-4461-a3a1-c68bfb6af203/volumes" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.117790 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd13bb86-5407-4a3b-b563-469791214577" path="/var/lib/kubelet/pods/cd13bb86-5407-4a3b-b563-469791214577/volumes" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.118796 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75cdcc7857-fs8tr" event={"ID":"f79e2698-4080-4a22-8110-89e8c7217018","Type":"ContainerStarted","Data":"3544a4dead9facb2fbb4871a3cd431f2c47b094ff3802f828384169ef401e1b2"} Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.118835 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t"] Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.299283 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.306318 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.313670 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-sxfb2" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.316165 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.316407 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.329118 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.417427 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7759df7475-j7d9x"] Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.486510 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkrq\" (UniqueName: \"kubernetes.io/projected/e722cdee-d18c-4096-9f5e-15ead9a799aa-kube-api-access-pxkrq\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.486865 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-config-data\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.486896 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.487016 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.487060 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-scripts\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.487128 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-logs\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.487159 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.578788 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.581109 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.587424 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.594634 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.594714 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-scripts\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.594753 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-logs\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.594785 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.594937 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxkrq\" (UniqueName: \"kubernetes.io/projected/e722cdee-d18c-4096-9f5e-15ead9a799aa-kube-api-access-pxkrq\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.594969 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-config-data\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.595005 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.595454 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.600422 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-logs\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.601822 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.621001 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.621957 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.665444 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-scripts\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.690939 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-config-data\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.691150 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxkrq\" (UniqueName: \"kubernetes.io/projected/e722cdee-d18c-4096-9f5e-15ead9a799aa-kube-api-access-pxkrq\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.697180 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.697243 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.697275 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.697320 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.697337 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.697357 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d562s\" (UniqueName: \"kubernetes.io/projected/7d02a681-5355-4fa0-9160-b01649cef2e3-kube-api-access-d562s\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.697399 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.746486 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.798726 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.798839 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.798871 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.798902 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.798944 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.798959 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.798982 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d562s\" (UniqueName: \"kubernetes.io/projected/7d02a681-5355-4fa0-9160-b01649cef2e3-kube-api-access-d562s\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.799145 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.800742 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.800973 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.818934 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.819034 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.819245 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.826295 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d562s\" (UniqueName: \"kubernetes.io/projected/7d02a681-5355-4fa0-9160-b01649cef2e3-kube-api-access-d562s\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.837184 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.955297 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.975513 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:01 crc kubenswrapper[4922]: I0126 14:30:01.981879 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.106374 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-sb\") pod \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.106442 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-kube-api-access-566nw\") pod \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.106515 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-nb\") pod \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.106563 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-swift-storage-0\") pod \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.106620 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-svc\") pod \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.106679 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-config\") pod 
\"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\" (UID: \"36b36d7c-c277-4c60-9408-3a2bf41cfc7d\") " Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.121325 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-kube-api-access-566nw" (OuterVolumeSpecName: "kube-api-access-566nw") pod "36b36d7c-c277-4c60-9408-3a2bf41cfc7d" (UID: "36b36d7c-c277-4c60-9408-3a2bf41cfc7d"). InnerVolumeSpecName "kube-api-access-566nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.127874 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75cdcc7857-fs8tr" event={"ID":"f79e2698-4080-4a22-8110-89e8c7217018","Type":"ContainerStarted","Data":"707692c12873888871eb422919319f2455ad525333bb26426bc21a1483d6a465"} Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.128668 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.130893 4922 generic.go:334] "Generic (PLEG): container finished" podID="22e391b4-ed5e-4fb3-828e-9b9f06d55b6b" containerID="46caf6b90014d6bf9203f6151a4c9713a97eb0cdfaa588b6839d966e581805db" exitCode=0 Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.130944 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" event={"ID":"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b","Type":"ContainerDied","Data":"46caf6b90014d6bf9203f6151a4c9713a97eb0cdfaa588b6839d966e581805db"} Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.130967 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" event={"ID":"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b","Type":"ContainerStarted","Data":"cfabed65a2ae7a183d552b8cbedb2f67875bf7fba153eea169c4d766ed222822"} Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.158262 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75cdcc7857-fs8tr" podStartSLOduration=4.158242634 podStartE2EDuration="4.158242634s" podCreationTimestamp="2026-01-26 14:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:02.154276891 +0000 UTC m=+1219.356539663" watchObservedRunningTime="2026-01-26 14:30:02.158242634 +0000 UTC m=+1219.360505406" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.188392 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.188428 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2408f586-2d21-49ee-a728-08b3190483b8","Type":"ContainerStarted","Data":"b1bad8ee41d67773e4a02e7191bd8b26d99ea67aff55f29ac82b571f654210ea"} Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.204737 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "36b36d7c-c277-4c60-9408-3a2bf41cfc7d" (UID: "36b36d7c-c277-4c60-9408-3a2bf41cfc7d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.208477 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36b36d7c-c277-4c60-9408-3a2bf41cfc7d" (UID: "36b36d7c-c277-4c60-9408-3a2bf41cfc7d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.209314 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.209344 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.209357 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-kube-api-access-566nw\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.212689 4922 generic.go:334] "Generic (PLEG): container finished" podID="65b63bfa-549a-4eb1-977a-90b1e119bd9e" containerID="278eecfd1f8c2211e90c826fed7122f534fc221352ac912e81883f9972117304" exitCode=0 Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.212771 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" event={"ID":"65b63bfa-549a-4eb1-977a-90b1e119bd9e","Type":"ContainerDied","Data":"278eecfd1f8c2211e90c826fed7122f534fc221352ac912e81883f9972117304"} Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.212796 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" event={"ID":"65b63bfa-549a-4eb1-977a-90b1e119bd9e","Type":"ContainerStarted","Data":"eb52340f1b4eb79829a5e6c1386b1b83e96803a64ed8a99458da44232a07fdfa"} Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.232619 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-config" (OuterVolumeSpecName: "config") pod "36b36d7c-c277-4c60-9408-3a2bf41cfc7d" (UID: "36b36d7c-c277-4c60-9408-3a2bf41cfc7d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.237133 4922 generic.go:334] "Generic (PLEG): container finished" podID="36b36d7c-c277-4c60-9408-3a2bf41cfc7d" containerID="f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167" exitCode=0 Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.237179 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" event={"ID":"36b36d7c-c277-4c60-9408-3a2bf41cfc7d","Type":"ContainerDied","Data":"f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167"} Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.237204 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" event={"ID":"36b36d7c-c277-4c60-9408-3a2bf41cfc7d","Type":"ContainerDied","Data":"c5b0c5e6ad76192313cbe7098200c43f14f21446fd6a3d755ce06f3bcaca138d"} Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.237212 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbb4847c5-48r4w" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.237221 4922 scope.go:117] "RemoveContainer" containerID="f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.254195 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36b36d7c-c277-4c60-9408-3a2bf41cfc7d" (UID: "36b36d7c-c277-4c60-9408-3a2bf41cfc7d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.266991 4922 scope.go:117] "RemoveContainer" containerID="b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.287794 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36b36d7c-c277-4c60-9408-3a2bf41cfc7d" (UID: "36b36d7c-c277-4c60-9408-3a2bf41cfc7d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.310760 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.310791 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.310804 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b36d7c-c277-4c60-9408-3a2bf41cfc7d-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.313365 4922 scope.go:117] "RemoveContainer" containerID="f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167" Jan 26 14:30:02 crc kubenswrapper[4922]: E0126 14:30:02.314558 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167\": container with ID starting with f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167 not found: ID does not exist" containerID="f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.314587 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167"} err="failed to get container status \"f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167\": rpc error: code = NotFound desc = could not find container \"f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167\": container with ID starting with f6d0ab4ddb0031cee8f7d3932abd5638c6d811c9aacd76d82836050da1c5a167 not found: ID does not exist" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.314607 4922 scope.go:117] "RemoveContainer" containerID="b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb" Jan 26 14:30:02 crc kubenswrapper[4922]: E0126 14:30:02.314933 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb\": container with ID starting with b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb not found: ID does not exist" containerID="b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.314955 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb"} err="failed to get container status \"b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb\": rpc error: code = NotFound desc = could not find container \"b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb\": container with ID starting with b4b85998206823dc6c0f1556dec050fc12e9725124d902b7ee22dd856629cdcb not found: ID does not exist" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.435223 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.705381 4922 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbb4847c5-48r4w"] Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.742751 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbb4847c5-48r4w"] Jan 26 14:30:02 crc kubenswrapper[4922]: I0126 14:30:02.767823 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.105940 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b36d7c-c277-4c60-9408-3a2bf41cfc7d" path="/var/lib/kubelet/pods/36b36d7c-c277-4c60-9408-3a2bf41cfc7d/volumes" Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.253111 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2408f586-2d21-49ee-a728-08b3190483b8","Type":"ContainerStarted","Data":"3067087c38cc3a89625a0e7665793e7084fa3c8d78a16095185bad66e6543453"} Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.254273 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.257999 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" event={"ID":"65b63bfa-549a-4eb1-977a-90b1e119bd9e","Type":"ContainerStarted","Data":"cab7de368f7660601c39d19f3f6e9d601847d598912537df722d070df6316366"} Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.258412 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.259622 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e722cdee-d18c-4096-9f5e-15ead9a799aa","Type":"ContainerStarted","Data":"4a9f0fc983e34b19c589c995cc98cab09a798e6c4cd273ec07f52a44829a0513"} Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.288399 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.288376938 podStartE2EDuration="4.288376938s" podCreationTimestamp="2026-01-26 14:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:03.270580767 +0000 UTC m=+1220.472843539" watchObservedRunningTime="2026-01-26 14:30:03.288376938 +0000 UTC m=+1220.490639700" Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.296209 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" podStartSLOduration=3.296187828 podStartE2EDuration="3.296187828s" podCreationTimestamp="2026-01-26 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:03.285628601 +0000 UTC m=+1220.487891383" watchObservedRunningTime="2026-01-26 14:30:03.296187828 +0000 UTC m=+1220.498450590" Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.334567 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.558207 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.559150 4922 scope.go:117] "RemoveContainer" 
containerID="2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855" Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.571187 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:03 crc kubenswrapper[4922]: I0126 14:30:03.703624 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.078927 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.271714 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-config-volume\") pod \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.271784 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tltt\" (UniqueName: \"kubernetes.io/projected/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-kube-api-access-7tltt\") pod \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.271881 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-secret-volume\") pod \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\" (UID: \"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b\") " Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.272476 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-config-volume" (OuterVolumeSpecName: "config-volume") pod "22e391b4-ed5e-4fb3-828e-9b9f06d55b6b" (UID: "22e391b4-ed5e-4fb3-828e-9b9f06d55b6b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.278989 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-kube-api-access-7tltt" (OuterVolumeSpecName: "kube-api-access-7tltt") pod "22e391b4-ed5e-4fb3-828e-9b9f06d55b6b" (UID: "22e391b4-ed5e-4fb3-828e-9b9f06d55b6b"). InnerVolumeSpecName "kube-api-access-7tltt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.279603 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "22e391b4-ed5e-4fb3-828e-9b9f06d55b6b" (UID: "22e391b4-ed5e-4fb3-828e-9b9f06d55b6b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.289792 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" event={"ID":"22e391b4-ed5e-4fb3-828e-9b9f06d55b6b","Type":"ContainerDied","Data":"cfabed65a2ae7a183d552b8cbedb2f67875bf7fba153eea169c4d766ed222822"} Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.289825 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfabed65a2ae7a183d552b8cbedb2f67875bf7fba153eea169c4d766ed222822" Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.289879 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t" Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.301475 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e722cdee-d18c-4096-9f5e-15ead9a799aa","Type":"ContainerStarted","Data":"f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5"} Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.306397 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerStarted","Data":"1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d"} Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.312162 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerName="cinder-scheduler" containerID="cri-o://57b5d0028587b97dcdb8dac99dfbc147ac07958eac9dee99787e3c843fec14b1" gracePeriod=30 Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.313689 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d02a681-5355-4fa0-9160-b01649cef2e3","Type":"ContainerStarted","Data":"4873fbbac82b5255e6ba996859a2e186ef125415ae18d922aa6364eeca36ebe1"} Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.313903 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerName="probe" containerID="cri-o://14adc38e5ac5b91e7a65cffe8cb2083a5e7c101a67c3e452eca95cd5bf88d8aa" gracePeriod=30 Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.374372 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.374406 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.374420 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tltt\" (UniqueName: \"kubernetes.io/projected/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b-kube-api-access-7tltt\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.772320 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:04 crc kubenswrapper[4922]: I0126 14:30:04.852014 4922 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:05 crc kubenswrapper[4922]: I0126 14:30:05.040594 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:30:05 crc kubenswrapper[4922]: I0126 14:30:05.329517 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e722cdee-d18c-4096-9f5e-15ead9a799aa","Type":"ContainerStarted","Data":"3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848"} Jan 26 14:30:05 crc kubenswrapper[4922]: I0126 14:30:05.329718 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerName="glance-httpd" containerID="cri-o://3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848" gracePeriod=30 Jan 26 14:30:05 crc kubenswrapper[4922]: I0126 14:30:05.329659 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerName="glance-log" containerID="cri-o://f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5" gracePeriod=30 Jan 26 14:30:05 crc kubenswrapper[4922]: I0126 14:30:05.338344 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d02a681-5355-4fa0-9160-b01649cef2e3","Type":"ContainerStarted","Data":"374e37de9c0bcf42f3519f177bd616af8a3794e50e455e0e4f63b622d3b52fa0"} Jan 26 14:30:05 crc kubenswrapper[4922]: I0126 14:30:05.379488 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c4549854d-f2kpv" Jan 26 14:30:05 crc kubenswrapper[4922]: I0126 14:30:05.386812 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.386793281 podStartE2EDuration="5.386793281s" podCreationTimestamp="2026-01-26 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:05.379435594 +0000 UTC m=+1222.581698366" watchObservedRunningTime="2026-01-26 14:30:05.386793281 +0000 UTC m=+1222.589056053" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.117395 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.217697 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxkrq\" (UniqueName: \"kubernetes.io/projected/e722cdee-d18c-4096-9f5e-15ead9a799aa-kube-api-access-pxkrq\") pod \"e722cdee-d18c-4096-9f5e-15ead9a799aa\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.217963 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"e722cdee-d18c-4096-9f5e-15ead9a799aa\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.218189 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-logs\") pod \"e722cdee-d18c-4096-9f5e-15ead9a799aa\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.218342 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-scripts\") pod \"e722cdee-d18c-4096-9f5e-15ead9a799aa\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.218465 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-httpd-run\") pod \"e722cdee-d18c-4096-9f5e-15ead9a799aa\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.218573 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-config-data\") pod \"e722cdee-d18c-4096-9f5e-15ead9a799aa\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.218665 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-combined-ca-bundle\") pod \"e722cdee-d18c-4096-9f5e-15ead9a799aa\" (UID: \"e722cdee-d18c-4096-9f5e-15ead9a799aa\") " Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.218931 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e722cdee-d18c-4096-9f5e-15ead9a799aa" (UID: "e722cdee-d18c-4096-9f5e-15ead9a799aa"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.219183 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-logs" (OuterVolumeSpecName: "logs") pod "e722cdee-d18c-4096-9f5e-15ead9a799aa" (UID: "e722cdee-d18c-4096-9f5e-15ead9a799aa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.219515 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.219588 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e722cdee-d18c-4096-9f5e-15ead9a799aa-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.225178 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-scripts" (OuterVolumeSpecName: "scripts") pod "e722cdee-d18c-4096-9f5e-15ead9a799aa" (UID: "e722cdee-d18c-4096-9f5e-15ead9a799aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.225199 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "e722cdee-d18c-4096-9f5e-15ead9a799aa" (UID: "e722cdee-d18c-4096-9f5e-15ead9a799aa"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.231047 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e722cdee-d18c-4096-9f5e-15ead9a799aa-kube-api-access-pxkrq" (OuterVolumeSpecName: "kube-api-access-pxkrq") pod "e722cdee-d18c-4096-9f5e-15ead9a799aa" (UID: "e722cdee-d18c-4096-9f5e-15ead9a799aa"). InnerVolumeSpecName "kube-api-access-pxkrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.249832 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e722cdee-d18c-4096-9f5e-15ead9a799aa" (UID: "e722cdee-d18c-4096-9f5e-15ead9a799aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.298224 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-config-data" (OuterVolumeSpecName: "config-data") pod "e722cdee-d18c-4096-9f5e-15ead9a799aa" (UID: "e722cdee-d18c-4096-9f5e-15ead9a799aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.321002 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.321044 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.321066 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxkrq\" (UniqueName: \"kubernetes.io/projected/e722cdee-d18c-4096-9f5e-15ead9a799aa-kube-api-access-pxkrq\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.321120 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.321133 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e722cdee-d18c-4096-9f5e-15ead9a799aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.353187 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d02a681-5355-4fa0-9160-b01649cef2e3","Type":"ContainerStarted","Data":"21e6cc0be0b23cc565c045ece66213939bcf48b4fa5823a0227ddb1179bbda6d"} Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.353509 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.353718 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerName="glance-log" containerID="cri-o://374e37de9c0bcf42f3519f177bd616af8a3794e50e455e0e4f63b622d3b52fa0" gracePeriod=30 Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.354205 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerName="glance-httpd" containerID="cri-o://21e6cc0be0b23cc565c045ece66213939bcf48b4fa5823a0227ddb1179bbda6d" gracePeriod=30 Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.373546 4922 generic.go:334] "Generic (PLEG): container finished" podID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerID="14adc38e5ac5b91e7a65cffe8cb2083a5e7c101a67c3e452eca95cd5bf88d8aa" exitCode=0 Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.373727 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b68caf8c-1863-4437-8ed2-5123d9a14db8","Type":"ContainerDied","Data":"14adc38e5ac5b91e7a65cffe8cb2083a5e7c101a67c3e452eca95cd5bf88d8aa"} Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.376163 4922 generic.go:334] "Generic (PLEG): container finished" podID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerID="3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848" exitCode=143 Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.376268 4922 generic.go:334] "Generic (PLEG): 
container finished" podID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerID="f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5" exitCode=143 Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.376350 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e722cdee-d18c-4096-9f5e-15ead9a799aa","Type":"ContainerDied","Data":"3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848"} Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.376466 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e722cdee-d18c-4096-9f5e-15ead9a799aa","Type":"ContainerDied","Data":"f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5"} Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.376547 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e722cdee-d18c-4096-9f5e-15ead9a799aa","Type":"ContainerDied","Data":"4a9f0fc983e34b19c589c995cc98cab09a798e6c4cd273ec07f52a44829a0513"} Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.377055 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.377075 4922 scope.go:117] "RemoveContainer" containerID="3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.381533 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.381517673 podStartE2EDuration="6.381517673s" podCreationTimestamp="2026-01-26 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:06.377002456 +0000 UTC m=+1223.579265238" watchObservedRunningTime="2026-01-26 14:30:06.381517673 +0000 UTC m=+1223.583780445" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.411303 4922 scope.go:117] "RemoveContainer" containerID="f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.427219 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.432740 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.440607 4922 scope.go:117] "RemoveContainer" containerID="3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848" Jan 26 14:30:06 crc kubenswrapper[4922]: E0126 14:30:06.441011 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848\": container with ID starting with 3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848 not found: ID does not exist" containerID="3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.441050 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848"} err="failed to get container status 
\"3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848\": rpc error: code = NotFound desc = could not find container \"3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848\": container with ID starting with 3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848 not found: ID does not exist" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.441112 4922 scope.go:117] "RemoveContainer" containerID="f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5" Jan 26 14:30:06 crc kubenswrapper[4922]: E0126 14:30:06.442456 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5\": container with ID starting with f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5 not found: ID does not exist" containerID="f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.442484 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5"} err="failed to get container status \"f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5\": rpc error: code = NotFound desc = could not find container \"f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5\": container with ID starting with f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5 not found: ID does not exist" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.442507 4922 scope.go:117] "RemoveContainer" containerID="3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.444686 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.444884 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848"} err="failed to get container status \"3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848\": rpc error: code = NotFound desc = could not find container \"3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848\": container with ID starting with 3a645f21b4f254256289ab878dea540895d25f88d020c2dc45630bf158008848 not found: ID does not exist" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.444956 4922 scope.go:117] "RemoveContainer" containerID="f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.446085 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5"} err="failed to get container status \"f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5\": rpc error: code = NotFound desc = could not find container \"f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5\": container with ID starting with f1b4ce447208a330a0dcc19623320b0bd0a90ef0b0bf908cd2e86a918e5682b5 not found: ID does not exist" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.459734 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:06 crc kubenswrapper[4922]: E0126 14:30:06.460113 4922 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerName="glance-log" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.460129 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerName="glance-log" Jan 26 14:30:06 crc kubenswrapper[4922]: E0126 14:30:06.460154 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerName="glance-httpd" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.460161 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerName="glance-httpd" Jan 26 14:30:06 crc kubenswrapper[4922]: E0126 14:30:06.460171 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b36d7c-c277-4c60-9408-3a2bf41cfc7d" containerName="init" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.460177 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b36d7c-c277-4c60-9408-3a2bf41cfc7d" containerName="init" Jan 26 14:30:06 crc kubenswrapper[4922]: E0126 14:30:06.460189 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e391b4-ed5e-4fb3-828e-9b9f06d55b6b" containerName="collect-profiles" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.460194 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e391b4-ed5e-4fb3-828e-9b9f06d55b6b" containerName="collect-profiles" Jan 26 14:30:06 crc kubenswrapper[4922]: E0126 14:30:06.460203 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b36d7c-c277-4c60-9408-3a2bf41cfc7d" containerName="dnsmasq-dns" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.460208 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b36d7c-c277-4c60-9408-3a2bf41cfc7d" containerName="dnsmasq-dns" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.460370 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="36b36d7c-c277-4c60-9408-3a2bf41cfc7d" containerName="dnsmasq-dns" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.460381 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e391b4-ed5e-4fb3-828e-9b9f06d55b6b" containerName="collect-profiles" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.460390 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerName="glance-log" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.460402 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e722cdee-d18c-4096-9f5e-15ead9a799aa" containerName="glance-httpd" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.461453 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.465481 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.465705 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.492364 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.633939 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.633998 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78m8q\" (UniqueName: \"kubernetes.io/projected/4a1d1b1d-b797-496c-b42e-d4b66f59115c-kube-api-access-78m8q\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.634067 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.634114 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-scripts\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.634131 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-logs\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.634148 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.634207 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-config-data\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.634223 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.671507 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b4f749b44-2qdw7" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.671615 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.735811 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-scripts\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.735862 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-logs\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.735886 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.735977 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-config-data\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.735999 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.736054 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.736095 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78m8q\" (UniqueName: \"kubernetes.io/projected/4a1d1b1d-b797-496c-b42e-d4b66f59115c-kube-api-access-78m8q\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc 
kubenswrapper[4922]: I0126 14:30:06.736156 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.736348 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.744037 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.744172 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-logs\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.762183 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.762457 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.763943 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-config-data\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.773350 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.774165 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-scripts\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.781454 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78m8q\" (UniqueName: 
\"kubernetes.io/projected/4a1d1b1d-b797-496c-b42e-d4b66f59115c-kube-api-access-78m8q\") pod \"glance-default-external-api-0\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:06 crc kubenswrapper[4922]: I0126 14:30:06.791731 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.015328 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7cd8b7c676-rg4sd" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.128671 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e722cdee-d18c-4096-9f5e-15ead9a799aa" path="/var/lib/kubelet/pods/e722cdee-d18c-4096-9f5e-15ead9a799aa/volumes" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.418576 4922 generic.go:334] "Generic (PLEG): container finished" podID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerID="21e6cc0be0b23cc565c045ece66213939bcf48b4fa5823a0227ddb1179bbda6d" exitCode=0 Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.418801 4922 generic.go:334] "Generic (PLEG): container finished" podID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerID="374e37de9c0bcf42f3519f177bd616af8a3794e50e455e0e4f63b622d3b52fa0" exitCode=143 Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.418821 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d02a681-5355-4fa0-9160-b01649cef2e3","Type":"ContainerDied","Data":"21e6cc0be0b23cc565c045ece66213939bcf48b4fa5823a0227ddb1179bbda6d"} Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.418843 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d02a681-5355-4fa0-9160-b01649cef2e3","Type":"ContainerDied","Data":"374e37de9c0bcf42f3519f177bd616af8a3794e50e455e0e4f63b622d3b52fa0"} Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.478303 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.539239 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.668893 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-httpd-run\") pod \"7d02a681-5355-4fa0-9160-b01649cef2e3\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.669310 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-combined-ca-bundle\") pod \"7d02a681-5355-4fa0-9160-b01649cef2e3\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.669335 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d562s\" (UniqueName: \"kubernetes.io/projected/7d02a681-5355-4fa0-9160-b01649cef2e3-kube-api-access-d562s\") pod \"7d02a681-5355-4fa0-9160-b01649cef2e3\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.669370 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-logs\") pod \"7d02a681-5355-4fa0-9160-b01649cef2e3\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.669401 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-config-data\") pod \"7d02a681-5355-4fa0-9160-b01649cef2e3\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.669443 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-scripts\") pod \"7d02a681-5355-4fa0-9160-b01649cef2e3\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.669555 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"7d02a681-5355-4fa0-9160-b01649cef2e3\" (UID: \"7d02a681-5355-4fa0-9160-b01649cef2e3\") " Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.669562 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7d02a681-5355-4fa0-9160-b01649cef2e3" (UID: "7d02a681-5355-4fa0-9160-b01649cef2e3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.669912 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.670521 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-logs" (OuterVolumeSpecName: "logs") pod "7d02a681-5355-4fa0-9160-b01649cef2e3" (UID: "7d02a681-5355-4fa0-9160-b01649cef2e3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.676852 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d02a681-5355-4fa0-9160-b01649cef2e3-kube-api-access-d562s" (OuterVolumeSpecName: "kube-api-access-d562s") pod "7d02a681-5355-4fa0-9160-b01649cef2e3" (UID: "7d02a681-5355-4fa0-9160-b01649cef2e3"). InnerVolumeSpecName "kube-api-access-d562s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.676850 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-scripts" (OuterVolumeSpecName: "scripts") pod "7d02a681-5355-4fa0-9160-b01649cef2e3" (UID: "7d02a681-5355-4fa0-9160-b01649cef2e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.709596 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "7d02a681-5355-4fa0-9160-b01649cef2e3" (UID: "7d02a681-5355-4fa0-9160-b01649cef2e3"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.711462 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d02a681-5355-4fa0-9160-b01649cef2e3" (UID: "7d02a681-5355-4fa0-9160-b01649cef2e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.760768 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-config-data" (OuterVolumeSpecName: "config-data") pod "7d02a681-5355-4fa0-9160-b01649cef2e3" (UID: "7d02a681-5355-4fa0-9160-b01649cef2e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.771362 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.771391 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d562s\" (UniqueName: \"kubernetes.io/projected/7d02a681-5355-4fa0-9160-b01649cef2e3-kube-api-access-d562s\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.771402 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d02a681-5355-4fa0-9160-b01649cef2e3-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.771410 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.771418 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d02a681-5355-4fa0-9160-b01649cef2e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.771441 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.792544 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 14:30:07 crc kubenswrapper[4922]: I0126 14:30:07.873223 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.257799 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 14:30:08 crc kubenswrapper[4922]: E0126 14:30:08.258476 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerName="glance-httpd" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.258493 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerName="glance-httpd" Jan 26 14:30:08 crc kubenswrapper[4922]: E0126 14:30:08.258516 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerName="glance-log" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.258523 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerName="glance-log" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.258676 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerName="glance-httpd" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.258705 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d02a681-5355-4fa0-9160-b01649cef2e3" containerName="glance-log" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.259371 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.262618 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.262639 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4rqdx" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.262767 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.282561 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.384775 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config-secret\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.384874 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-combined-ca-bundle\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.385131 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.385183 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7cg\" (UniqueName: \"kubernetes.io/projected/15299c8c-add1-43d2-8a31-20e930c41508-kube-api-access-9w7cg\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.425324 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 26 14:30:08 crc kubenswrapper[4922]: E0126 14:30:08.426053 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-9w7cg openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="15299c8c-add1-43d2-8a31-20e930c41508" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.435385 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.455233 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.456504 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.465185 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.487855 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config-secret\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.487922 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-combined-ca-bundle\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.488026 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.488056 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7cg\" (UniqueName: \"kubernetes.io/projected/15299c8c-add1-43d2-8a31-20e930c41508-kube-api-access-9w7cg\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.489539 4922 generic.go:334] "Generic (PLEG): container finished" podID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerID="57b5d0028587b97dcdb8dac99dfbc147ac07958eac9dee99787e3c843fec14b1" exitCode=0 Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.489623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b68caf8c-1863-4437-8ed2-5123d9a14db8","Type":"ContainerDied","Data":"57b5d0028587b97dcdb8dac99dfbc147ac07958eac9dee99787e3c843fec14b1"} Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.489707 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: E0126 14:30:08.489869 4922 projected.go:194] Error preparing data for projected volume kube-api-access-9w7cg for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (15299c8c-add1-43d2-8a31-20e930c41508) does not match the UID in record. The object might have been deleted and then recreated Jan 26 14:30:08 crc kubenswrapper[4922]: E0126 14:30:08.489956 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/15299c8c-add1-43d2-8a31-20e930c41508-kube-api-access-9w7cg podName:15299c8c-add1-43d2-8a31-20e930c41508 nodeName:}" failed. No retries permitted until 2026-01-26 14:30:08.989937797 +0000 UTC m=+1226.192200569 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9w7cg" (UniqueName: "kubernetes.io/projected/15299c8c-add1-43d2-8a31-20e930c41508-kube-api-access-9w7cg") pod "openstackclient" (UID: "15299c8c-add1-43d2-8a31-20e930c41508") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (15299c8c-add1-43d2-8a31-20e930c41508) does not match the UID in record. The object might have been deleted and then recreated Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.497630 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config-secret\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.498314 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-combined-ca-bundle\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.499867 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a1d1b1d-b797-496c-b42e-d4b66f59115c","Type":"ContainerStarted","Data":"be148c1a2cbb208f14758738600f7170e55f9f18a3c3240a5d7315b021c0c849"} Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.499909 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a1d1b1d-b797-496c-b42e-d4b66f59115c","Type":"ContainerStarted","Data":"8071cb8d07cbf7cd136fde30e7dae0dbfdbf62759b5353be5bad0895eeabd8df"} Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.504660 4922 generic.go:334] "Generic (PLEG): container finished" podID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerID="1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d" exitCode=1 Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.504717 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerDied","Data":"1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d"} Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.504753 4922 scope.go:117] "RemoveContainer" containerID="2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.505389 4922 scope.go:117] "RemoveContainer" containerID="1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d" Jan 26 14:30:08 crc kubenswrapper[4922]: E0126 14:30:08.505638 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(1678095e-0a1d-4199-90c6-ea3afc879e0b)\"" pod="openstack/watcher-decision-engine-0" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.515329 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d02a681-5355-4fa0-9160-b01649cef2e3","Type":"ContainerDied","Data":"4873fbbac82b5255e6ba996859a2e186ef125415ae18d922aa6364eeca36ebe1"} Jan 26 14:30:08 crc 
kubenswrapper[4922]: I0126 14:30:08.515658 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.575556 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.587923 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.591195 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c962db74-b70e-44df-a3d2-8a2dda688ca8-openstack-config-secret\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.591237 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg67m\" (UniqueName: \"kubernetes.io/projected/c962db74-b70e-44df-a3d2-8a2dda688ca8-kube-api-access-dg67m\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.591323 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c962db74-b70e-44df-a3d2-8a2dda688ca8-openstack-config\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.591402 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c962db74-b70e-44df-a3d2-8a2dda688ca8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.604495 4922 scope.go:117] "RemoveContainer" containerID="21e6cc0be0b23cc565c045ece66213939bcf48b4fa5823a0227ddb1179bbda6d" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.614692 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.626837 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.626951 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.629998 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.631290 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.693121 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c962db74-b70e-44df-a3d2-8a2dda688ca8-openstack-config\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.693235 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c962db74-b70e-44df-a3d2-8a2dda688ca8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.693276 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c962db74-b70e-44df-a3d2-8a2dda688ca8-openstack-config-secret\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.693303 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg67m\" (UniqueName: \"kubernetes.io/projected/c962db74-b70e-44df-a3d2-8a2dda688ca8-kube-api-access-dg67m\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.694866 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c962db74-b70e-44df-a3d2-8a2dda688ca8-openstack-config\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.697414 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c962db74-b70e-44df-a3d2-8a2dda688ca8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.697527 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c962db74-b70e-44df-a3d2-8a2dda688ca8-openstack-config-secret\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.709678 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg67m\" (UniqueName: \"kubernetes.io/projected/c962db74-b70e-44df-a3d2-8a2dda688ca8-kube-api-access-dg67m\") pod \"openstackclient\" (UID: \"c962db74-b70e-44df-a3d2-8a2dda688ca8\") " pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.738387 4922 scope.go:117] "RemoveContainer" containerID="374e37de9c0bcf42f3519f177bd616af8a3794e50e455e0e4f63b622d3b52fa0" 
Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.796908 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.797182 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmtzp\" (UniqueName: \"kubernetes.io/projected/00524d24-79f3-444a-b95b-1cb294892c78-kube-api-access-pmtzp\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.797354 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-logs\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.797411 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.797474 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.797540 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.797945 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.798594 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.810995 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.814522 4922 util.go:30] "No sandbox for pod can be found. 
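
[annotation] local-storage06-crc, unmounted from the old glance pod earlier and re-attached to the new one here, is a local persistent volume: MountVolume.MountDevice below resolves it to the host path /mnt/openstack/pv06, and node affinity pins any consumer to the crc node. A sketch of what such a PV object looks like, built with the client-go API types; capacity and storage class name are assumptions, the path and node name come from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// Build and print a local PersistentVolume resembling local-storage06-crc.
func main() {
	fs := corev1.PersistentVolumeFilesystem
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage06-crc"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // assumed size
			},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:       &fs,
			StorageClassName: "local-storage", // assumed class name
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv06"},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"crc"},
						}},
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(pv)
	fmt.Println(string(out))
}
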
Need to start a new one" pod="openstack/openstackclient" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.900189 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-896bq\" (UniqueName: \"kubernetes.io/projected/b68caf8c-1863-4437-8ed2-5123d9a14db8-kube-api-access-896bq\") pod \"b68caf8c-1863-4437-8ed2-5123d9a14db8\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.900435 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data\") pod \"b68caf8c-1863-4437-8ed2-5123d9a14db8\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.900487 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-combined-ca-bundle\") pod \"b68caf8c-1863-4437-8ed2-5123d9a14db8\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.900566 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data-custom\") pod \"b68caf8c-1863-4437-8ed2-5123d9a14db8\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.900585 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-scripts\") pod \"b68caf8c-1863-4437-8ed2-5123d9a14db8\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.900721 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b68caf8c-1863-4437-8ed2-5123d9a14db8-etc-machine-id\") pod \"b68caf8c-1863-4437-8ed2-5123d9a14db8\" (UID: \"b68caf8c-1863-4437-8ed2-5123d9a14db8\") " Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.901001 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.901060 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.901131 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.901152 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.901196 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmtzp\" (UniqueName: \"kubernetes.io/projected/00524d24-79f3-444a-b95b-1cb294892c78-kube-api-access-pmtzp\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.901237 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-logs\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.901256 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.901285 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.901672 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.903574 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b68caf8c-1863-4437-8ed2-5123d9a14db8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b68caf8c-1863-4437-8ed2-5123d9a14db8" (UID: "b68caf8c-1863-4437-8ed2-5123d9a14db8"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.903623 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-logs\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.906105 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.922307 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-scripts" (OuterVolumeSpecName: "scripts") pod "b68caf8c-1863-4437-8ed2-5123d9a14db8" (UID: "b68caf8c-1863-4437-8ed2-5123d9a14db8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.933172 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.935708 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.941993 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmtzp\" (UniqueName: \"kubernetes.io/projected/00524d24-79f3-444a-b95b-1cb294892c78-kube-api-access-pmtzp\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.945355 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68caf8c-1863-4437-8ed2-5123d9a14db8-kube-api-access-896bq" (OuterVolumeSpecName: "kube-api-access-896bq") pod "b68caf8c-1863-4437-8ed2-5123d9a14db8" (UID: "b68caf8c-1863-4437-8ed2-5123d9a14db8"). InnerVolumeSpecName "kube-api-access-896bq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.948757 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.964025 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b68caf8c-1863-4437-8ed2-5123d9a14db8" (UID: "b68caf8c-1863-4437-8ed2-5123d9a14db8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:08 crc kubenswrapper[4922]: I0126 14:30:08.972095 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.004787 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7cg\" (UniqueName: \"kubernetes.io/projected/15299c8c-add1-43d2-8a31-20e930c41508-kube-api-access-9w7cg\") pod \"openstackclient\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " pod="openstack/openstackclient" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.004884 4922 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.004895 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.004903 4922 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b68caf8c-1863-4437-8ed2-5123d9a14db8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.004912 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-896bq\" (UniqueName: \"kubernetes.io/projected/b68caf8c-1863-4437-8ed2-5123d9a14db8-kube-api-access-896bq\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: E0126 14:30:09.009324 4922 projected.go:194] Error preparing data for projected volume kube-api-access-9w7cg for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (15299c8c-add1-43d2-8a31-20e930c41508) does not match the UID in record. The object might have been deleted and then recreated Jan 26 14:30:09 crc kubenswrapper[4922]: E0126 14:30:09.009393 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/15299c8c-add1-43d2-8a31-20e930c41508-kube-api-access-9w7cg podName:15299c8c-add1-43d2-8a31-20e930c41508 nodeName:}" failed. No retries permitted until 2026-01-26 14:30:10.009374425 +0000 UTC m=+1227.211637197 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9w7cg" (UniqueName: "kubernetes.io/projected/15299c8c-add1-43d2-8a31-20e930c41508-kube-api-access-9w7cg") pod "openstackclient" (UID: "15299c8c-add1-43d2-8a31-20e930c41508") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (15299c8c-add1-43d2-8a31-20e930c41508) does not match the UID in record. The object might have been deleted and then recreated Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.038233 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.083129 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b68caf8c-1863-4437-8ed2-5123d9a14db8" (UID: "b68caf8c-1863-4437-8ed2-5123d9a14db8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.102755 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.106711 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.117173 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d02a681-5355-4fa0-9160-b01649cef2e3" path="/var/lib/kubelet/pods/7d02a681-5355-4fa0-9160-b01649cef2e3/volumes" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.167581 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data" (OuterVolumeSpecName: "config-data") pod "b68caf8c-1863-4437-8ed2-5123d9a14db8" (UID: "b68caf8c-1863-4437-8ed2-5123d9a14db8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.209808 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b68caf8c-1863-4437-8ed2-5123d9a14db8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.383068 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.536822 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b68caf8c-1863-4437-8ed2-5123d9a14db8","Type":"ContainerDied","Data":"3c9a4091b0a911171b52cef765688fcd91f4d0218953001771b7d6b1e73f4c2c"} Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.536866 4922 scope.go:117] "RemoveContainer" containerID="14adc38e5ac5b91e7a65cffe8cb2083a5e7c101a67c3e452eca95cd5bf88d8aa" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.536997 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.559653 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a1d1b1d-b797-496c-b42e-d4b66f59115c","Type":"ContainerStarted","Data":"97c12d126b31eb8fbe57a841922a2a1ebe3236d06bf9788ae8ef3ec250d47c8e"} Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.568192 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.568322 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c962db74-b70e-44df-a3d2-8a2dda688ca8","Type":"ContainerStarted","Data":"783b3a03b1904a4e8cf7635e1d44a6bbb9e36b321ff6f58609b04b94c159e509"} Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.596698 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.5966787140000003 podStartE2EDuration="3.596678714s" podCreationTimestamp="2026-01-26 14:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:09.578235964 +0000 UTC m=+1226.780498726" watchObservedRunningTime="2026-01-26 14:30:09.596678714 +0000 UTC m=+1226.798941476" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.604486 4922 scope.go:117] "RemoveContainer" containerID="57b5d0028587b97dcdb8dac99dfbc147ac07958eac9dee99787e3c843fec14b1" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.624574 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.662126 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.671380 4922 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="15299c8c-add1-43d2-8a31-20e930c41508" podUID="c962db74-b70e-44df-a3d2-8a2dda688ca8" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.683241 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.708139 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 14:30:09 crc kubenswrapper[4922]: E0126 14:30:09.708608 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerName="probe" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.708633 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerName="probe" Jan 26 14:30:09 crc kubenswrapper[4922]: E0126 14:30:09.708663 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerName="cinder-scheduler" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.708673 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerName="cinder-scheduler" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.708890 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerName="cinder-scheduler" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.708910 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b68caf8c-1863-4437-8ed2-5123d9a14db8" containerName="probe" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.709985 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.712942 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.717254 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.723812 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.825189 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config-secret\") pod \"15299c8c-add1-43d2-8a31-20e930c41508\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.825378 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-combined-ca-bundle\") pod \"15299c8c-add1-43d2-8a31-20e930c41508\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.825405 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config\") pod \"15299c8c-add1-43d2-8a31-20e930c41508\" (UID: \"15299c8c-add1-43d2-8a31-20e930c41508\") " Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.826150 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9456dfb4-60f4-440a-b11c-aef57ca86762-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.826258 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mpwg\" (UniqueName: \"kubernetes.io/projected/9456dfb4-60f4-440a-b11c-aef57ca86762-kube-api-access-6mpwg\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.826300 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-scripts\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.826343 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.826383 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " 
pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.826424 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-config-data\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.826415 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "15299c8c-add1-43d2-8a31-20e930c41508" (UID: "15299c8c-add1-43d2-8a31-20e930c41508"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.826503 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w7cg\" (UniqueName: \"kubernetes.io/projected/15299c8c-add1-43d2-8a31-20e930c41508-kube-api-access-9w7cg\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.832982 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15299c8c-add1-43d2-8a31-20e930c41508" (UID: "15299c8c-add1-43d2-8a31-20e930c41508"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.849252 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "15299c8c-add1-43d2-8a31-20e930c41508" (UID: "15299c8c-add1-43d2-8a31-20e930c41508"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.929846 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-config-data\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.929962 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9456dfb4-60f4-440a-b11c-aef57ca86762-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.930018 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mpwg\" (UniqueName: \"kubernetes.io/projected/9456dfb4-60f4-440a-b11c-aef57ca86762-kube-api-access-6mpwg\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.930102 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-scripts\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.930183 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.930262 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.930362 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.930383 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15299c8c-add1-43d2-8a31-20e930c41508-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.930543 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15299c8c-add1-43d2-8a31-20e930c41508-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.936172 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9456dfb4-60f4-440a-b11c-aef57ca86762-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.954672 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6mpwg\" (UniqueName: \"kubernetes.io/projected/9456dfb4-60f4-440a-b11c-aef57ca86762-kube-api-access-6mpwg\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.955604 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.955736 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-scripts\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.957926 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:09 crc kubenswrapper[4922]: I0126 14:30:09.959900 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9456dfb4-60f4-440a-b11c-aef57ca86762-config-data\") pod \"cinder-scheduler-0\" (UID: \"9456dfb4-60f4-440a-b11c-aef57ca86762\") " pod="openstack/cinder-scheduler-0" Jan 26 14:30:10 crc kubenswrapper[4922]: I0126 14:30:10.078534 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 14:30:10 crc kubenswrapper[4922]: I0126 14:30:10.580017 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00524d24-79f3-444a-b95b-1cb294892c78","Type":"ContainerStarted","Data":"de114b6e89954adbd7951400b69a850f023d66b3c83c1a444b73a52fddd6e339"} Jan 26 14:30:10 crc kubenswrapper[4922]: I0126 14:30:10.580477 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00524d24-79f3-444a-b95b-1cb294892c78","Type":"ContainerStarted","Data":"dd2ac07e4b6daf81c1f72fd05663a09ce881c42e2b562d749f540b4c2c2c7bd8"} Jan 26 14:30:10 crc kubenswrapper[4922]: I0126 14:30:10.581548 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 14:30:10 crc kubenswrapper[4922]: I0126 14:30:10.658959 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 14:30:10 crc kubenswrapper[4922]: I0126 14:30:10.843803 4922 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="15299c8c-add1-43d2-8a31-20e930c41508" podUID="c962db74-b70e-44df-a3d2-8a2dda688ca8" Jan 26 14:30:10 crc kubenswrapper[4922]: I0126 14:30:10.848378 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:30:10 crc kubenswrapper[4922]: I0126 14:30:10.909252 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4f48c7-pfcfp"] Jan 26 14:30:10 crc kubenswrapper[4922]: I0126 14:30:10.910263 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" podUID="9d9d5644-40b7-49a4-9d01-6158dcb79c3d" containerName="dnsmasq-dns" containerID="cri-o://571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057" gracePeriod=10 Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.104564 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15299c8c-add1-43d2-8a31-20e930c41508" path="/var/lib/kubelet/pods/15299c8c-add1-43d2-8a31-20e930c41508/volumes" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.104947 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b68caf8c-1863-4437-8ed2-5123d9a14db8" path="/var/lib/kubelet/pods/b68caf8c-1863-4437-8ed2-5123d9a14db8/volumes" Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.310985 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd13bb86_5407_4a3b_b563_469791214577.slice/crio-1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd13bb86_5407_4a3b_b563_469791214577.slice/crio-1d9b75a2a8e61166502189189c741fb7ceead5ee4d3b3abd178dbd7c01791b4f.scope: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.311559 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4798bf8_f62b_4f88_8a42_4f33ee79eeaa.slice/crio-0a82e0b3537a1b01ba3d756d702288dba9ecae822fbb9b79e29b21531c9d62de.scope WatchSource:0}: Error finding container 0a82e0b3537a1b01ba3d756d702288dba9ecae822fbb9b79e29b21531c9d62de: Status 404 returned error can't find the container with id 0a82e0b3537a1b01ba3d756d702288dba9ecae822fbb9b79e29b21531c9d62de Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.311630 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4798bf8_f62b_4f88_8a42_4f33ee79eeaa.slice/crio-b66757c71c460f451103d0a0037a31a4d528be992a78412ab84286759b87f132.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4798bf8_f62b_4f88_8a42_4f33ee79eeaa.slice/crio-b66757c71c460f451103d0a0037a31a4d528be992a78412ab84286759b87f132.scope: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.311833 4922 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1678095e_0a1d_4199_90c6_ea3afc879e0b.slice/crio-2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855.scope WatchSource:0}: Error finding container 2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855: Status 404 returned error can't find the container with id 2bc2d64467898d354b6bf2578a25bfdc75f82888bd7665d8024090c44ba97855 Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.312209 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4483d7ac_397e_4220_82f3_c6832fe69c2e.slice/crio-65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e.scope WatchSource:0}: Error finding container 65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e: Status 404 returned error can't find the container with id 65296c46963f7ba742d40f4347e3dc3a33e44afab4d365f9f751cb278c3d4d5e Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.312345 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd13bb86_5407_4a3b_b563_469791214577.slice/crio-conmon-cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd13bb86_5407_4a3b_b563_469791214577.slice/crio-conmon-cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9.scope: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.312583 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd13bb86_5407_4a3b_b563_469791214577.slice/crio-cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd13bb86_5407_4a3b_b563_469791214577.slice/crio-cd8179a6fa8c99d795b41d2f0e986226dff40508f652ee3c67765e4092ff02b9.scope: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.327712 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb68caf8c_1863_4437_8ed2_5123d9a14db8.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb68caf8c_1863_4437_8ed2_5123d9a14db8.slice: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.327748 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22670ba0_ab65_49a7_b3f0_928800e10ca1.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22670ba0_ab65_49a7_b3f0_928800e10ca1.slice: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.327801 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51136c50_be90_4461_a3a1_c68bfb6af203.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51136c50_be90_4461_a3a1_c68bfb6af203.slice: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.335002 4922 
watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36b36d7c_c277_4c60_9408_3a2bf41cfc7d.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36b36d7c_c277_4c60_9408_3a2bf41cfc7d.slice: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.353220 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22e391b4_ed5e_4fb3_828e_9b9f06d55b6b.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22e391b4_ed5e_4fb3_828e_9b9f06d55b6b.slice: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.353954 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode722cdee_d18c_4096_9f5e_15ead9a799aa.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode722cdee_d18c_4096_9f5e_15ead9a799aa.slice: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.353990 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d02a681_5355_4fa0_9160_b01649cef2e3.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d02a681_5355_4fa0_9160_b01649cef2e3.slice: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.354012 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1678095e_0a1d_4199_90c6_ea3afc879e0b.slice/crio-conmon-1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1678095e_0a1d_4199_90c6_ea3afc879e0b.slice/crio-conmon-1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d.scope: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.354027 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1678095e_0a1d_4199_90c6_ea3afc879e0b.slice/crio-1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1678095e_0a1d_4199_90c6_ea3afc879e0b.slice/crio-1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d.scope: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: W0126 14:30:11.365004 4922 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15299c8c_add1_43d2_8a31_20e930c41508.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15299c8c_add1_43d2_8a31_20e930c41508.slice: no such file or directory Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.536223 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.642083 4922 generic.go:334] "Generic (PLEG): container finished" podID="9d9d5644-40b7-49a4-9d01-6158dcb79c3d" containerID="571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057" exitCode=0 Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.642539 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.642755 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" event={"ID":"9d9d5644-40b7-49a4-9d01-6158dcb79c3d","Type":"ContainerDied","Data":"571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057"} Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.642784 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4f48c7-pfcfp" event={"ID":"9d9d5644-40b7-49a4-9d01-6158dcb79c3d","Type":"ContainerDied","Data":"9ffd81f1537f366df0bb7e7e055c9f35af41a07bc5a64c2b3dfe0e3771608e53"} Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.642828 4922 scope.go:117] "RemoveContainer" containerID="571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.656394 4922 generic.go:334] "Generic (PLEG): container finished" podID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerID="c2474915dcbbbd3a768fade598d9d0b2e8243b6212b803776364f82a9316297c" exitCode=137 Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.656464 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b4f749b44-2qdw7" event={"ID":"9bccd630-51ec-481b-97c6-1f2757dfc685","Type":"ContainerDied","Data":"c2474915dcbbbd3a768fade598d9d0b2e8243b6212b803776364f82a9316297c"} Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.661136 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00524d24-79f3-444a-b95b-1cb294892c78","Type":"ContainerStarted","Data":"17265ebaadc079245d56399e4c3412ded4e8a7eddb5ca58e74c8f7510fdbee74"} Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.666025 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9456dfb4-60f4-440a-b11c-aef57ca86762","Type":"ContainerStarted","Data":"03f709883bc26d27f1b99c18f71fb134eede5b3250d5c40c2a30ad89e1ef32e1"} Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.666065 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9456dfb4-60f4-440a-b11c-aef57ca86762","Type":"ContainerStarted","Data":"7ead366d7148ff455431478f21330b3c5a700489056ba63ed35d295873509b89"} Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.672720 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-nb\") pod \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.672885 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-dns-svc\") pod \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 
14:30:11.672946 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zdfm\" (UniqueName: \"kubernetes.io/projected/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-kube-api-access-6zdfm\") pod \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.673011 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-config\") pod \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.673035 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-sb\") pod \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\" (UID: \"9d9d5644-40b7-49a4-9d01-6158dcb79c3d\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.736821 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.736798341 podStartE2EDuration="3.736798341s" podCreationTimestamp="2026-01-26 14:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:11.695620332 +0000 UTC m=+1228.897883094" watchObservedRunningTime="2026-01-26 14:30:11.736798341 +0000 UTC m=+1228.939061113" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.761191 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-kube-api-access-6zdfm" (OuterVolumeSpecName: "kube-api-access-6zdfm") pod "9d9d5644-40b7-49a4-9d01-6158dcb79c3d" (UID: "9d9d5644-40b7-49a4-9d01-6158dcb79c3d"). InnerVolumeSpecName "kube-api-access-6zdfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.784467 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zdfm\" (UniqueName: \"kubernetes.io/projected/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-kube-api-access-6zdfm\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.791809 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d9d5644-40b7-49a4-9d01-6158dcb79c3d" (UID: "9d9d5644-40b7-49a4-9d01-6158dcb79c3d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.809753 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-config" (OuterVolumeSpecName: "config") pod "9d9d5644-40b7-49a4-9d01-6158dcb79c3d" (UID: "9d9d5644-40b7-49a4-9d01-6158dcb79c3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.862300 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d9d5644-40b7-49a4-9d01-6158dcb79c3d" (UID: "9d9d5644-40b7-49a4-9d01-6158dcb79c3d"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.865315 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d9d5644-40b7-49a4-9d01-6158dcb79c3d" (UID: "9d9d5644-40b7-49a4-9d01-6158dcb79c3d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.890832 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.890866 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.890879 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.890892 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d9d5644-40b7-49a4-9d01-6158dcb79c3d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.899389 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.991773 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-config-data\") pod \"9bccd630-51ec-481b-97c6-1f2757dfc685\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.991946 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-combined-ca-bundle\") pod \"9bccd630-51ec-481b-97c6-1f2757dfc685\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.991969 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-tls-certs\") pod \"9bccd630-51ec-481b-97c6-1f2757dfc685\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.992015 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhd4h\" (UniqueName: \"kubernetes.io/projected/9bccd630-51ec-481b-97c6-1f2757dfc685-kube-api-access-vhd4h\") pod \"9bccd630-51ec-481b-97c6-1f2757dfc685\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.992036 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-secret-key\") pod \"9bccd630-51ec-481b-97c6-1f2757dfc685\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " Jan 26 14:30:11 crc 
kubenswrapper[4922]: I0126 14:30:11.992085 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bccd630-51ec-481b-97c6-1f2757dfc685-logs\") pod \"9bccd630-51ec-481b-97c6-1f2757dfc685\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.992164 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-scripts\") pod \"9bccd630-51ec-481b-97c6-1f2757dfc685\" (UID: \"9bccd630-51ec-481b-97c6-1f2757dfc685\") " Jan 26 14:30:11 crc kubenswrapper[4922]: I0126 14:30:11.997383 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bccd630-51ec-481b-97c6-1f2757dfc685-logs" (OuterVolumeSpecName: "logs") pod "9bccd630-51ec-481b-97c6-1f2757dfc685" (UID: "9bccd630-51ec-481b-97c6-1f2757dfc685"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.006986 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bccd630-51ec-481b-97c6-1f2757dfc685-kube-api-access-vhd4h" (OuterVolumeSpecName: "kube-api-access-vhd4h") pod "9bccd630-51ec-481b-97c6-1f2757dfc685" (UID: "9bccd630-51ec-481b-97c6-1f2757dfc685"). InnerVolumeSpecName "kube-api-access-vhd4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.013150 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9bccd630-51ec-481b-97c6-1f2757dfc685" (UID: "9bccd630-51ec-481b-97c6-1f2757dfc685"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.040731 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4f48c7-pfcfp"] Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.046666 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-config-data" (OuterVolumeSpecName: "config-data") pod "9bccd630-51ec-481b-97c6-1f2757dfc685" (UID: "9bccd630-51ec-481b-97c6-1f2757dfc685"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.059637 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-scripts" (OuterVolumeSpecName: "scripts") pod "9bccd630-51ec-481b-97c6-1f2757dfc685" (UID: "9bccd630-51ec-481b-97c6-1f2757dfc685"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.073493 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795f4f48c7-pfcfp"] Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.077273 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bccd630-51ec-481b-97c6-1f2757dfc685" (UID: "9bccd630-51ec-481b-97c6-1f2757dfc685"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.081203 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9bccd630-51ec-481b-97c6-1f2757dfc685" (UID: "9bccd630-51ec-481b-97c6-1f2757dfc685"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.095521 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.095551 4922 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.095563 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhd4h\" (UniqueName: \"kubernetes.io/projected/9bccd630-51ec-481b-97c6-1f2757dfc685-kube-api-access-vhd4h\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.095574 4922 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bccd630-51ec-481b-97c6-1f2757dfc685-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.095583 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bccd630-51ec-481b-97c6-1f2757dfc685-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.095592 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.095602 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bccd630-51ec-481b-97c6-1f2757dfc685-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.123307 4922 scope.go:117] "RemoveContainer" containerID="fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.214932 4922 scope.go:117] "RemoveContainer" containerID="571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057" Jan 26 14:30:12 crc kubenswrapper[4922]: E0126 14:30:12.215888 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057\": container with ID starting with 571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057 not found: ID does not exist" containerID="571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.215928 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057"} err="failed to get container status \"571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057\": rpc error: code = NotFound desc = 
could not find container \"571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057\": container with ID starting with 571790120fdbc132691eb2888969e9dd5a167a844fbd7acf7d90a5c0563bf057 not found: ID does not exist" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.215959 4922 scope.go:117] "RemoveContainer" containerID="fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe" Jan 26 14:30:12 crc kubenswrapper[4922]: E0126 14:30:12.216252 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe\": container with ID starting with fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe not found: ID does not exist" containerID="fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.216277 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe"} err="failed to get container status \"fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe\": rpc error: code = NotFound desc = could not find container \"fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe\": container with ID starting with fd72073bae62f9f57c278a13ea710ff1bf42328166608d68f230cac056efd1fe not found: ID does not exist" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.363919 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.681386 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9456dfb4-60f4-440a-b11c-aef57ca86762","Type":"ContainerStarted","Data":"4fece7c627ae1a1b7f4bf4121e2f1a685307ab5a3b933f00e45df1ddf42a2400"} Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.692728 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b4f749b44-2qdw7" event={"ID":"9bccd630-51ec-481b-97c6-1f2757dfc685","Type":"ContainerDied","Data":"82f9ca8c7e66e8fa845c0ad5b02a0e7614a68d93238f9e95d8efcaae5b2ca71e"} Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.692787 4922 scope.go:117] "RemoveContainer" containerID="0142e706f07caaac11da2ed37cb72515ef4a009fdcfce539c86464a124266490" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.692876 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b4f749b44-2qdw7" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.713175 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.7131562259999997 podStartE2EDuration="3.713156226s" podCreationTimestamp="2026-01-26 14:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:12.705264664 +0000 UTC m=+1229.907527436" watchObservedRunningTime="2026-01-26 14:30:12.713156226 +0000 UTC m=+1229.915418998" Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.733761 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b4f749b44-2qdw7"] Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.740943 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b4f749b44-2qdw7"] Jan 26 14:30:12 crc kubenswrapper[4922]: I0126 14:30:12.926676 4922 scope.go:117] "RemoveContainer" containerID="c2474915dcbbbd3a768fade598d9d0b2e8243b6212b803776364f82a9316297c" Jan 26 14:30:13 crc kubenswrapper[4922]: I0126 14:30:13.114719 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" path="/var/lib/kubelet/pods/9bccd630-51ec-481b-97c6-1f2757dfc685/volumes" Jan 26 14:30:13 crc kubenswrapper[4922]: I0126 14:30:13.115301 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d9d5644-40b7-49a4-9d01-6158dcb79c3d" path="/var/lib/kubelet/pods/9d9d5644-40b7-49a4-9d01-6158dcb79c3d/volumes" Jan 26 14:30:13 crc kubenswrapper[4922]: I0126 14:30:13.554707 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:13 crc kubenswrapper[4922]: I0126 14:30:13.554762 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:13 crc kubenswrapper[4922]: I0126 14:30:13.555618 4922 scope.go:117] "RemoveContainer" containerID="1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d" Jan 26 14:30:13 crc kubenswrapper[4922]: E0126 14:30:13.556260 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(1678095e-0a1d-4199-90c6-ea3afc879e0b)\"" pod="openstack/watcher-decision-engine-0" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.077112 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-69b95496c5-qvg59"] Jan 26 14:30:14 crc kubenswrapper[4922]: E0126 14:30:14.077687 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9d5644-40b7-49a4-9d01-6158dcb79c3d" containerName="init" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.078057 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9d5644-40b7-49a4-9d01-6158dcb79c3d" containerName="init" Jan 26 14:30:14 crc kubenswrapper[4922]: E0126 14:30:14.078126 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.078136 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon" Jan 26 14:30:14 crc 
kubenswrapper[4922]: E0126 14:30:14.079923 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon-log" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.079945 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon-log" Jan 26 14:30:14 crc kubenswrapper[4922]: E0126 14:30:14.079984 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9d5644-40b7-49a4-9d01-6158dcb79c3d" containerName="dnsmasq-dns" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.079995 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9d5644-40b7-49a4-9d01-6158dcb79c3d" containerName="dnsmasq-dns" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.080285 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9d5644-40b7-49a4-9d01-6158dcb79c3d" containerName="dnsmasq-dns" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.080320 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.080349 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bccd630-51ec-481b-97c6-1f2757dfc685" containerName="horizon-log" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.082722 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.086603 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.086857 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.086972 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.110436 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-69b95496c5-qvg59"] Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.134625 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq4h2\" (UniqueName: \"kubernetes.io/projected/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-kube-api-access-qq4h2\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.134704 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-config-data\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.134739 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-etc-swift\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.134816 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-combined-ca-bundle\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.134835 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-public-tls-certs\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.134856 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-run-httpd\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.134870 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-log-httpd\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.134924 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-internal-tls-certs\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.237817 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-etc-swift\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.237972 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-combined-ca-bundle\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.238005 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-public-tls-certs\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.238030 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-run-httpd\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.238049 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-log-httpd\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.238165 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-internal-tls-certs\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.238206 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq4h2\" (UniqueName: \"kubernetes.io/projected/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-kube-api-access-qq4h2\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.238256 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-config-data\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.245757 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-log-httpd\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.246987 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-run-httpd\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.252621 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-public-tls-certs\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.255268 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-internal-tls-certs\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.255652 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-config-data\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.267306 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-combined-ca-bundle\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.272571 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-etc-swift\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.273233 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq4h2\" (UniqueName: \"kubernetes.io/projected/a2bcb723-e3e3-41f8-9704-10a1f8e78bd7-kube-api-access-qq4h2\") pod \"swift-proxy-69b95496c5-qvg59\" (UID: \"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7\") " pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.401370 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:14 crc kubenswrapper[4922]: I0126 14:30:14.988205 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-69b95496c5-qvg59"] Jan 26 14:30:15 crc kubenswrapper[4922]: W0126 14:30:15.000479 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2bcb723_e3e3_41f8_9704_10a1f8e78bd7.slice/crio-73026c9ed091a900ce9a5555e9c4397be51631eaad1918f325aff6fad627f890 WatchSource:0}: Error finding container 73026c9ed091a900ce9a5555e9c4397be51631eaad1918f325aff6fad627f890: Status 404 returned error can't find the container with id 73026c9ed091a900ce9a5555e9c4397be51631eaad1918f325aff6fad627f890 Jan 26 14:30:15 crc kubenswrapper[4922]: I0126 14:30:15.079608 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 14:30:15 crc kubenswrapper[4922]: I0126 14:30:15.745002 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-69b95496c5-qvg59" event={"ID":"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7","Type":"ContainerStarted","Data":"b48c68e6069da8992280322ceec5084f09c899c2aca928aa33c0b40846c3ce78"} Jan 26 14:30:15 crc kubenswrapper[4922]: I0126 14:30:15.745668 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-69b95496c5-qvg59" event={"ID":"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7","Type":"ContainerStarted","Data":"122a54207580234a7c09d3b153c152fe7c1691f36cd387cac3db768405550e91"} Jan 26 14:30:15 crc kubenswrapper[4922]: I0126 14:30:15.745686 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-69b95496c5-qvg59" event={"ID":"a2bcb723-e3e3-41f8-9704-10a1f8e78bd7","Type":"ContainerStarted","Data":"73026c9ed091a900ce9a5555e9c4397be51631eaad1918f325aff6fad627f890"} Jan 26 14:30:15 crc kubenswrapper[4922]: I0126 14:30:15.745770 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:15 crc kubenswrapper[4922]: I0126 14:30:15.745800 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:15 crc kubenswrapper[4922]: I0126 14:30:15.767414 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-69b95496c5-qvg59" podStartSLOduration=1.767393689 
podStartE2EDuration="1.767393689s" podCreationTimestamp="2026-01-26 14:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:15.762021135 +0000 UTC m=+1232.964283927" watchObservedRunningTime="2026-01-26 14:30:15.767393689 +0000 UTC m=+1232.969656471" Jan 26 14:30:16 crc kubenswrapper[4922]: I0126 14:30:16.792167 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 14:30:16 crc kubenswrapper[4922]: I0126 14:30:16.792258 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 14:30:16 crc kubenswrapper[4922]: I0126 14:30:16.835449 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 14:30:16 crc kubenswrapper[4922]: I0126 14:30:16.849123 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 14:30:16 crc kubenswrapper[4922]: I0126 14:30:16.864569 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 14:30:17 crc kubenswrapper[4922]: I0126 14:30:17.769833 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 14:30:17 crc kubenswrapper[4922]: I0126 14:30:17.769874 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 14:30:17 crc kubenswrapper[4922]: I0126 14:30:17.988015 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:17 crc kubenswrapper[4922]: I0126 14:30:17.988412 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="proxy-httpd" containerID="cri-o://35369f0eb29926fa78b9e79c13da3ae421e5178af2b296a875ec4efa64f9e6be" gracePeriod=30 Jan 26 14:30:17 crc kubenswrapper[4922]: I0126 14:30:17.988434 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="sg-core" containerID="cri-o://02ad6f1e470e9ba0ddd7cff64a43f0aa82d74e584c7ad392e59de485356f11eb" gracePeriod=30 Jan 26 14:30:17 crc kubenswrapper[4922]: I0126 14:30:17.988481 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="ceilometer-notification-agent" containerID="cri-o://2634ba48a15e7efc7cd0486fa403f3f5eabe101dd2668fd5e3a4201bd352d1cc" gracePeriod=30 Jan 26 14:30:17 crc kubenswrapper[4922]: I0126 14:30:17.999725 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="ceilometer-central-agent" containerID="cri-o://c41f9da2244b62a7af0e87092700061ee879552a2c776f56525485277d89e7eb" gracePeriod=30 Jan 26 14:30:18 crc kubenswrapper[4922]: I0126 14:30:18.820585 4922 generic.go:334] "Generic (PLEG): container finished" podID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerID="35369f0eb29926fa78b9e79c13da3ae421e5178af2b296a875ec4efa64f9e6be" exitCode=0 Jan 26 14:30:18 crc kubenswrapper[4922]: I0126 14:30:18.821049 4922 generic.go:334] "Generic (PLEG): container finished" 
podID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerID="02ad6f1e470e9ba0ddd7cff64a43f0aa82d74e584c7ad392e59de485356f11eb" exitCode=2 Jan 26 14:30:18 crc kubenswrapper[4922]: I0126 14:30:18.821142 4922 generic.go:334] "Generic (PLEG): container finished" podID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerID="c41f9da2244b62a7af0e87092700061ee879552a2c776f56525485277d89e7eb" exitCode=0 Jan 26 14:30:18 crc kubenswrapper[4922]: I0126 14:30:18.821027 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerDied","Data":"35369f0eb29926fa78b9e79c13da3ae421e5178af2b296a875ec4efa64f9e6be"} Jan 26 14:30:18 crc kubenswrapper[4922]: I0126 14:30:18.821560 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerDied","Data":"02ad6f1e470e9ba0ddd7cff64a43f0aa82d74e584c7ad392e59de485356f11eb"} Jan 26 14:30:18 crc kubenswrapper[4922]: I0126 14:30:18.821650 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerDied","Data":"c41f9da2244b62a7af0e87092700061ee879552a2c776f56525485277d89e7eb"} Jan 26 14:30:19 crc kubenswrapper[4922]: I0126 14:30:19.107536 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:19 crc kubenswrapper[4922]: I0126 14:30:19.107574 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:19 crc kubenswrapper[4922]: I0126 14:30:19.134653 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:19 crc kubenswrapper[4922]: I0126 14:30:19.184389 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:19 crc kubenswrapper[4922]: I0126 14:30:19.834269 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:19 crc kubenswrapper[4922]: I0126 14:30:19.834389 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.157590 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-p7wdz"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.159787 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.188004 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p7wdz"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.263952 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n2q8\" (UniqueName: \"kubernetes.io/projected/33a20bca-2b21-4f25-a853-04533951ab18-kube-api-access-6n2q8\") pod \"nova-api-db-create-p7wdz\" (UID: \"33a20bca-2b21-4f25-a853-04533951ab18\") " pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.264054 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20bca-2b21-4f25-a853-04533951ab18-operator-scripts\") pod \"nova-api-db-create-p7wdz\" (UID: \"33a20bca-2b21-4f25-a853-04533951ab18\") " pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.265620 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cv9zm"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.267938 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.289394 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cv9zm"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.366192 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20bca-2b21-4f25-a853-04533951ab18-operator-scripts\") pod \"nova-api-db-create-p7wdz\" (UID: \"33a20bca-2b21-4f25-a853-04533951ab18\") " pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.366291 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gcsh\" (UniqueName: \"kubernetes.io/projected/87e9f925-c521-47fb-bc44-6243504b38ad-kube-api-access-7gcsh\") pod \"nova-cell0-db-create-cv9zm\" (UID: \"87e9f925-c521-47fb-bc44-6243504b38ad\") " pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.366341 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87e9f925-c521-47fb-bc44-6243504b38ad-operator-scripts\") pod \"nova-cell0-db-create-cv9zm\" (UID: \"87e9f925-c521-47fb-bc44-6243504b38ad\") " pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.366389 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n2q8\" (UniqueName: \"kubernetes.io/projected/33a20bca-2b21-4f25-a853-04533951ab18-kube-api-access-6n2q8\") pod \"nova-api-db-create-p7wdz\" (UID: \"33a20bca-2b21-4f25-a853-04533951ab18\") " pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.370889 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20bca-2b21-4f25-a853-04533951ab18-operator-scripts\") pod \"nova-api-db-create-p7wdz\" (UID: \"33a20bca-2b21-4f25-a853-04533951ab18\") " pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:20 crc 
kubenswrapper[4922]: I0126 14:30:20.383274 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1213-account-create-update-x4gt5"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.385148 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.405852 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.413732 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.446122 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1213-account-create-update-x4gt5"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.491812 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n2q8\" (UniqueName: \"kubernetes.io/projected/33a20bca-2b21-4f25-a853-04533951ab18-kube-api-access-6n2q8\") pod \"nova-api-db-create-p7wdz\" (UID: \"33a20bca-2b21-4f25-a853-04533951ab18\") " pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.492405 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.493214 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee24504-84ec-4bd8-b29e-797fff4db145-operator-scripts\") pod \"nova-api-1213-account-create-update-x4gt5\" (UID: \"4ee24504-84ec-4bd8-b29e-797fff4db145\") " pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.493280 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gcsh\" (UniqueName: \"kubernetes.io/projected/87e9f925-c521-47fb-bc44-6243504b38ad-kube-api-access-7gcsh\") pod \"nova-cell0-db-create-cv9zm\" (UID: \"87e9f925-c521-47fb-bc44-6243504b38ad\") " pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.493323 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87e9f925-c521-47fb-bc44-6243504b38ad-operator-scripts\") pod \"nova-cell0-db-create-cv9zm\" (UID: \"87e9f925-c521-47fb-bc44-6243504b38ad\") " pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.493401 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjt5w\" (UniqueName: \"kubernetes.io/projected/4ee24504-84ec-4bd8-b29e-797fff4db145-kube-api-access-vjt5w\") pod \"nova-api-1213-account-create-update-x4gt5\" (UID: \"4ee24504-84ec-4bd8-b29e-797fff4db145\") " pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.495687 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87e9f925-c521-47fb-bc44-6243504b38ad-operator-scripts\") pod \"nova-cell0-db-create-cv9zm\" (UID: \"87e9f925-c521-47fb-bc44-6243504b38ad\") " pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.526202 4922 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.526291 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.530850 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gcsh\" (UniqueName: \"kubernetes.io/projected/87e9f925-c521-47fb-bc44-6243504b38ad-kube-api-access-7gcsh\") pod \"nova-cell0-db-create-cv9zm\" (UID: \"87e9f925-c521-47fb-bc44-6243504b38ad\") " pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.567152 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rh9fp"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.572293 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.588167 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rh9fp"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.595276 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjt5w\" (UniqueName: \"kubernetes.io/projected/4ee24504-84ec-4bd8-b29e-797fff4db145-kube-api-access-vjt5w\") pod \"nova-api-1213-account-create-update-x4gt5\" (UID: \"4ee24504-84ec-4bd8-b29e-797fff4db145\") " pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.595390 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee24504-84ec-4bd8-b29e-797fff4db145-operator-scripts\") pod \"nova-api-1213-account-create-update-x4gt5\" (UID: \"4ee24504-84ec-4bd8-b29e-797fff4db145\") " pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.597458 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee24504-84ec-4bd8-b29e-797fff4db145-operator-scripts\") pod \"nova-api-1213-account-create-update-x4gt5\" (UID: \"4ee24504-84ec-4bd8-b29e-797fff4db145\") " pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.607694 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.652130 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjt5w\" (UniqueName: \"kubernetes.io/projected/4ee24504-84ec-4bd8-b29e-797fff4db145-kube-api-access-vjt5w\") pod \"nova-api-1213-account-create-update-x4gt5\" (UID: \"4ee24504-84ec-4bd8-b29e-797fff4db145\") " pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.701174 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9807-account-create-update-gjf78"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.702434 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.702844 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzgk\" (UniqueName: \"kubernetes.io/projected/85e52d2e-62ff-40ba-9e10-a970927a8e47-kube-api-access-xwzgk\") pod \"nova-cell1-db-create-rh9fp\" (UID: \"85e52d2e-62ff-40ba-9e10-a970927a8e47\") " pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.702965 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e52d2e-62ff-40ba-9e10-a970927a8e47-operator-scripts\") pod \"nova-cell1-db-create-rh9fp\" (UID: \"85e52d2e-62ff-40ba-9e10-a970927a8e47\") " pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.710563 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.766413 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9807-account-create-update-gjf78"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.799846 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.807946 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwzgk\" (UniqueName: \"kubernetes.io/projected/85e52d2e-62ff-40ba-9e10-a970927a8e47-kube-api-access-xwzgk\") pod \"nova-cell1-db-create-rh9fp\" (UID: \"85e52d2e-62ff-40ba-9e10-a970927a8e47\") " pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.808077 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjm2\" (UniqueName: \"kubernetes.io/projected/292489b8-e052-41ab-9648-a2113a58ca1b-kube-api-access-xcjm2\") pod \"nova-cell0-9807-account-create-update-gjf78\" (UID: \"292489b8-e052-41ab-9648-a2113a58ca1b\") " pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.808133 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/292489b8-e052-41ab-9648-a2113a58ca1b-operator-scripts\") pod \"nova-cell0-9807-account-create-update-gjf78\" (UID: \"292489b8-e052-41ab-9648-a2113a58ca1b\") " pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.808210 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e52d2e-62ff-40ba-9e10-a970927a8e47-operator-scripts\") pod \"nova-cell1-db-create-rh9fp\" (UID: \"85e52d2e-62ff-40ba-9e10-a970927a8e47\") " pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.809102 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e52d2e-62ff-40ba-9e10-a970927a8e47-operator-scripts\") pod \"nova-cell1-db-create-rh9fp\" (UID: \"85e52d2e-62ff-40ba-9e10-a970927a8e47\") " pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:20 crc 
kubenswrapper[4922]: I0126 14:30:20.815941 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1cf0-account-create-update-mccq6"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.817251 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.825512 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.830823 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwzgk\" (UniqueName: \"kubernetes.io/projected/85e52d2e-62ff-40ba-9e10-a970927a8e47-kube-api-access-xwzgk\") pod \"nova-cell1-db-create-rh9fp\" (UID: \"85e52d2e-62ff-40ba-9e10-a970927a8e47\") " pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.907698 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1cf0-account-create-update-mccq6"] Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.915666 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.925723 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-operator-scripts\") pod \"nova-cell1-1cf0-account-create-update-mccq6\" (UID: \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\") " pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.925913 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjm2\" (UniqueName: \"kubernetes.io/projected/292489b8-e052-41ab-9648-a2113a58ca1b-kube-api-access-xcjm2\") pod \"nova-cell0-9807-account-create-update-gjf78\" (UID: \"292489b8-e052-41ab-9648-a2113a58ca1b\") " pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.925955 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt4kg\" (UniqueName: \"kubernetes.io/projected/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-kube-api-access-nt4kg\") pod \"nova-cell1-1cf0-account-create-update-mccq6\" (UID: \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\") " pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.926000 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/292489b8-e052-41ab-9648-a2113a58ca1b-operator-scripts\") pod \"nova-cell0-9807-account-create-update-gjf78\" (UID: \"292489b8-e052-41ab-9648-a2113a58ca1b\") " pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.926896 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.927125 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/292489b8-e052-41ab-9648-a2113a58ca1b-operator-scripts\") pod \"nova-cell0-9807-account-create-update-gjf78\" (UID: \"292489b8-e052-41ab-9648-a2113a58ca1b\") " pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:20 crc kubenswrapper[4922]: I0126 14:30:20.950732 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjm2\" (UniqueName: \"kubernetes.io/projected/292489b8-e052-41ab-9648-a2113a58ca1b-kube-api-access-xcjm2\") pod \"nova-cell0-9807-account-create-update-gjf78\" (UID: \"292489b8-e052-41ab-9648-a2113a58ca1b\") " pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.028706 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt4kg\" (UniqueName: \"kubernetes.io/projected/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-kube-api-access-nt4kg\") pod \"nova-cell1-1cf0-account-create-update-mccq6\" (UID: \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\") " pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.029035 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-operator-scripts\") pod \"nova-cell1-1cf0-account-create-update-mccq6\" (UID: \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\") " pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.035270 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-operator-scripts\") pod \"nova-cell1-1cf0-account-create-update-mccq6\" (UID: \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\") " pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.047185 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt4kg\" (UniqueName: \"kubernetes.io/projected/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-kube-api-access-nt4kg\") pod \"nova-cell1-1cf0-account-create-update-mccq6\" (UID: \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\") " pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.054756 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.312885 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.868538 4922 generic.go:334] "Generic (PLEG): container finished" podID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerID="2634ba48a15e7efc7cd0486fa403f3f5eabe101dd2668fd5e3a4201bd352d1cc" exitCode=0 Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.868743 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerDied","Data":"2634ba48a15e7efc7cd0486fa403f3f5eabe101dd2668fd5e3a4201bd352d1cc"} Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.868892 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:30:21 crc kubenswrapper[4922]: I0126 14:30:21.868902 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:30:22 crc kubenswrapper[4922]: I0126 14:30:22.547055 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:22 crc kubenswrapper[4922]: I0126 14:30:22.644235 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:23 crc kubenswrapper[4922]: I0126 14:30:23.201439 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 14:30:23 crc kubenswrapper[4922]: I0126 14:30:23.202566 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd" containerName="kube-state-metrics" containerID="cri-o://769b53b07a50207a25307b7f5203882f67548fea70fc64521a6748241808602a" gracePeriod=30 Jan 26 14:30:23 crc kubenswrapper[4922]: I0126 14:30:23.889350 4922 generic.go:334] "Generic (PLEG): container finished" podID="ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd" containerID="769b53b07a50207a25307b7f5203882f67548fea70fc64521a6748241808602a" exitCode=2 Jan 26 14:30:23 crc kubenswrapper[4922]: I0126 14:30:23.889442 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd","Type":"ContainerDied","Data":"769b53b07a50207a25307b7f5203882f67548fea70fc64521a6748241808602a"} Jan 26 14:30:24 crc kubenswrapper[4922]: I0126 14:30:24.092371 4922 scope.go:117] "RemoveContainer" containerID="1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d" Jan 26 14:30:24 crc kubenswrapper[4922]: E0126 14:30:24.092624 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(1678095e-0a1d-4199-90c6-ea3afc879e0b)\"" pod="openstack/watcher-decision-engine-0" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" Jan 26 14:30:24 crc kubenswrapper[4922]: I0126 14:30:24.412940 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:24 crc kubenswrapper[4922]: I0126 14:30:24.417369 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-69b95496c5-qvg59" Jan 26 14:30:25 crc kubenswrapper[4922]: I0126 14:30:25.283274 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:30:26 crc 
kubenswrapper[4922]: I0126 14:30:26.673780 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.715938 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.772667 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-config-data\") pod \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.772730 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-combined-ca-bundle\") pod \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.772768 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-log-httpd\") pod \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.772795 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-sg-core-conf-yaml\") pod \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.772842 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d2ks\" (UniqueName: \"kubernetes.io/projected/a7d63894-e178-44b9-9b6a-93b98cb78b8a-kube-api-access-8d2ks\") pod \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.772869 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmvq4\" (UniqueName: \"kubernetes.io/projected/ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd-kube-api-access-lmvq4\") pod \"ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd\" (UID: \"ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd\") " Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.772905 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-run-httpd\") pod \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.772969 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-scripts\") pod \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\" (UID: \"a7d63894-e178-44b9-9b6a-93b98cb78b8a\") " Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.773824 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a7d63894-e178-44b9-9b6a-93b98cb78b8a" (UID: "a7d63894-e178-44b9-9b6a-93b98cb78b8a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.774558 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a7d63894-e178-44b9-9b6a-93b98cb78b8a" (UID: "a7d63894-e178-44b9-9b6a-93b98cb78b8a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.779443 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-scripts" (OuterVolumeSpecName: "scripts") pod "a7d63894-e178-44b9-9b6a-93b98cb78b8a" (UID: "a7d63894-e178-44b9-9b6a-93b98cb78b8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.782608 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd-kube-api-access-lmvq4" (OuterVolumeSpecName: "kube-api-access-lmvq4") pod "ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd" (UID: "ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd"). InnerVolumeSpecName "kube-api-access-lmvq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.794408 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d63894-e178-44b9-9b6a-93b98cb78b8a-kube-api-access-8d2ks" (OuterVolumeSpecName: "kube-api-access-8d2ks") pod "a7d63894-e178-44b9-9b6a-93b98cb78b8a" (UID: "a7d63894-e178-44b9-9b6a-93b98cb78b8a"). InnerVolumeSpecName "kube-api-access-8d2ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.808214 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a7d63894-e178-44b9-9b6a-93b98cb78b8a" (UID: "a7d63894-e178-44b9-9b6a-93b98cb78b8a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.859308 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7d63894-e178-44b9-9b6a-93b98cb78b8a" (UID: "a7d63894-e178-44b9-9b6a-93b98cb78b8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.877902 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.877933 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.877942 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.877954 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d2ks\" (UniqueName: \"kubernetes.io/projected/a7d63894-e178-44b9-9b6a-93b98cb78b8a-kube-api-access-8d2ks\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.877966 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmvq4\" (UniqueName: \"kubernetes.io/projected/ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd-kube-api-access-lmvq4\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.877974 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7d63894-e178-44b9-9b6a-93b98cb78b8a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.877982 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.908282 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-config-data" (OuterVolumeSpecName: "config-data") pod "a7d63894-e178-44b9-9b6a-93b98cb78b8a" (UID: "a7d63894-e178-44b9-9b6a-93b98cb78b8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.917719 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd","Type":"ContainerDied","Data":"1c8df4bd8a3b6178e44d0529145aeb7be9c6660f766ed60ee2392f89636271f5"} Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.917761 4922 scope.go:117] "RemoveContainer" containerID="769b53b07a50207a25307b7f5203882f67548fea70fc64521a6748241808602a" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.917860 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.923873 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c962db74-b70e-44df-a3d2-8a2dda688ca8","Type":"ContainerStarted","Data":"5f4a329accae5e533f99380227ceb2e19d8223c4bc208b8655afc499aad0a07a"} Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.935743 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7d63894-e178-44b9-9b6a-93b98cb78b8a","Type":"ContainerDied","Data":"ffe1f4b9722c577561d773a8ecb797c6769fac582cecb6a10214e1c91428b236"} Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.935828 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.953800 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.027236201 podStartE2EDuration="18.953781001s" podCreationTimestamp="2026-01-26 14:30:08 +0000 UTC" firstStartedPulling="2026-01-26 14:30:09.381242337 +0000 UTC m=+1226.583505109" lastFinishedPulling="2026-01-26 14:30:26.307787137 +0000 UTC m=+1243.510049909" observedRunningTime="2026-01-26 14:30:26.940375191 +0000 UTC m=+1244.142637983" watchObservedRunningTime="2026-01-26 14:30:26.953781001 +0000 UTC m=+1244.156043773" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.971932 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1213-account-create-update-x4gt5"] Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.983333 4922 scope.go:117] "RemoveContainer" containerID="35369f0eb29926fa78b9e79c13da3ae421e5178af2b296a875ec4efa64f9e6be" Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.991146 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 14:30:26 crc kubenswrapper[4922]: I0126 14:30:26.993860 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7d63894-e178-44b9-9b6a-93b98cb78b8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.021410 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.035291 4922 scope.go:117] "RemoveContainer" containerID="02ad6f1e470e9ba0ddd7cff64a43f0aa82d74e584c7ad392e59de485356f11eb" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.043330 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.061106 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.072432 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rh9fp"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.090798 4922 scope.go:117] "RemoveContainer" containerID="2634ba48a15e7efc7cd0486fa403f3f5eabe101dd2668fd5e3a4201bd352d1cc" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.138652 4922 scope.go:117] "RemoveContainer" containerID="c41f9da2244b62a7af0e87092700061ee879552a2c776f56525485277d89e7eb" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.163326 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" 
path="/var/lib/kubelet/pods/a7d63894-e178-44b9-9b6a-93b98cb78b8a/volumes" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.164194 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd" path="/var/lib/kubelet/pods/ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd/volumes" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.164764 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 14:30:27 crc kubenswrapper[4922]: E0126 14:30:27.178957 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd" containerName="kube-state-metrics" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.178996 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd" containerName="kube-state-metrics" Jan 26 14:30:27 crc kubenswrapper[4922]: E0126 14:30:27.179086 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="sg-core" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.179099 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="sg-core" Jan 26 14:30:27 crc kubenswrapper[4922]: E0126 14:30:27.179134 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="ceilometer-notification-agent" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.179143 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="ceilometer-notification-agent" Jan 26 14:30:27 crc kubenswrapper[4922]: E0126 14:30:27.179155 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="ceilometer-central-agent" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.179162 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="ceilometer-central-agent" Jan 26 14:30:27 crc kubenswrapper[4922]: E0126 14:30:27.179173 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="proxy-httpd" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.179180 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="proxy-httpd" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.179590 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="proxy-httpd" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.179619 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef85b6d3-6e1d-4a96-9a93-19ab9618c3cd" containerName="kube-state-metrics" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.179632 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="ceilometer-notification-agent" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.179645 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="sg-core" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.179659 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d63894-e178-44b9-9b6a-93b98cb78b8a" containerName="ceilometer-central-agent" Jan 26 14:30:27 crc 
kubenswrapper[4922]: W0126 14:30:27.181485 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9b04c51_33d8_4c83_9d5e_e11f2e4cf035.slice/crio-f1e30df9fb45760d51638a327054ceca7d8ac12bc44a086a4b410d50b76aeced WatchSource:0}: Error finding container f1e30df9fb45760d51638a327054ceca7d8ac12bc44a086a4b410d50b76aeced: Status 404 returned error can't find the container with id f1e30df9fb45760d51638a327054ceca7d8ac12bc44a086a4b410d50b76aeced Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.182978 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.183016 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.183876 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.185727 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.187855 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.187915 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-97fj2" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.194090 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.194539 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.196252 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.196480 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.200490 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.218475 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p7wdz"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.232105 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9807-account-create-update-gjf78"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.238480 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cv9zm"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.259122 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1cf0-account-create-update-mccq6"] Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.304539 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.304633 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-config-data\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.304666 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131b28f9-a3ee-401d-a4e0-f66ec118f156-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.304698 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.304737 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/131b28f9-a3ee-401d-a4e0-f66ec118f156-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.304871 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/131b28f9-a3ee-401d-a4e0-f66ec118f156-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.304903 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.304967 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-run-httpd\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.304984 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-log-httpd\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.305028 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhc22\" (UniqueName: \"kubernetes.io/projected/131b28f9-a3ee-401d-a4e0-f66ec118f156-kube-api-access-vhc22\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.305051 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qwqp\" (UniqueName: \"kubernetes.io/projected/04d5ab2a-c747-4f88-9762-9371202cfa28-kube-api-access-9qwqp\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.305083 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-scripts\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407024 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-run-httpd\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407087 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-log-httpd\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407129 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhc22\" (UniqueName: \"kubernetes.io/projected/131b28f9-a3ee-401d-a4e0-f66ec118f156-kube-api-access-vhc22\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407148 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9qwqp\" (UniqueName: \"kubernetes.io/projected/04d5ab2a-c747-4f88-9762-9371202cfa28-kube-api-access-9qwqp\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407167 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-scripts\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407202 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407245 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-config-data\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407267 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131b28f9-a3ee-401d-a4e0-f66ec118f156-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407292 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407320 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/131b28f9-a3ee-401d-a4e0-f66ec118f156-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407362 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/131b28f9-a3ee-401d-a4e0-f66ec118f156-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407385 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.407532 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-log-httpd\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " 
pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.409083 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-run-httpd\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.422827 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/131b28f9-a3ee-401d-a4e0-f66ec118f156-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.424320 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.433361 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/131b28f9-a3ee-401d-a4e0-f66ec118f156-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.433803 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.434465 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-config-data\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.434940 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.436020 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131b28f9-a3ee-401d-a4e0-f66ec118f156-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.436875 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhc22\" (UniqueName: \"kubernetes.io/projected/131b28f9-a3ee-401d-a4e0-f66ec118f156-kube-api-access-vhc22\") pod \"kube-state-metrics-0\" (UID: \"131b28f9-a3ee-401d-a4e0-f66ec118f156\") " pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.437577 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-scripts\") pod 
\"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.440960 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qwqp\" (UniqueName: \"kubernetes.io/projected/04d5ab2a-c747-4f88-9762-9371202cfa28-kube-api-access-9qwqp\") pod \"ceilometer-0\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.525167 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.541079 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.948847 4922 generic.go:334] "Generic (PLEG): container finished" podID="292489b8-e052-41ab-9648-a2113a58ca1b" containerID="01345469de7665cf96a4b1f4e95a3e8b299448eb989fd623ccd8a3b98f138974" exitCode=0 Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.949555 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9807-account-create-update-gjf78" event={"ID":"292489b8-e052-41ab-9648-a2113a58ca1b","Type":"ContainerDied","Data":"01345469de7665cf96a4b1f4e95a3e8b299448eb989fd623ccd8a3b98f138974"} Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.949578 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9807-account-create-update-gjf78" event={"ID":"292489b8-e052-41ab-9648-a2113a58ca1b","Type":"ContainerStarted","Data":"24a3a70a4f095bf9ec793d58c252c2b8aa797283bfcc55a7f54da91c5d5afb33"} Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.951504 4922 generic.go:334] "Generic (PLEG): container finished" podID="87e9f925-c521-47fb-bc44-6243504b38ad" containerID="f16afd1356c41b87183dae0d377a616972f8e970c52d1000d825459d4d66caa5" exitCode=0 Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.951573 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cv9zm" event={"ID":"87e9f925-c521-47fb-bc44-6243504b38ad","Type":"ContainerDied","Data":"f16afd1356c41b87183dae0d377a616972f8e970c52d1000d825459d4d66caa5"} Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.951597 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cv9zm" event={"ID":"87e9f925-c521-47fb-bc44-6243504b38ad","Type":"ContainerStarted","Data":"866ec6627a4f82efe7251f9ec6eafafbe1f3fb4abbc7994a5e17fa2088de7c21"} Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.953386 4922 generic.go:334] "Generic (PLEG): container finished" podID="4ee24504-84ec-4bd8-b29e-797fff4db145" containerID="d6c2714360348fd442014ee38105d8cedb6d774c460a0c7a07590082aa781d7b" exitCode=0 Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.953444 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1213-account-create-update-x4gt5" event={"ID":"4ee24504-84ec-4bd8-b29e-797fff4db145","Type":"ContainerDied","Data":"d6c2714360348fd442014ee38105d8cedb6d774c460a0c7a07590082aa781d7b"} Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.953460 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1213-account-create-update-x4gt5" event={"ID":"4ee24504-84ec-4bd8-b29e-797fff4db145","Type":"ContainerStarted","Data":"7397af93e1aac2d33eaa8a3d8520d29c73c898fa50c0c429a5850b56ba7fd551"} Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 
14:30:27.968212 4922 generic.go:334] "Generic (PLEG): container finished" podID="c9b04c51-33d8-4c83-9d5e-e11f2e4cf035" containerID="53c76d61f12f30462f88f9dfaeefda3545994c152b148aaa6ddbd414f699b1ef" exitCode=0 Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.968284 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" event={"ID":"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035","Type":"ContainerDied","Data":"53c76d61f12f30462f88f9dfaeefda3545994c152b148aaa6ddbd414f699b1ef"} Jan 26 14:30:27 crc kubenswrapper[4922]: I0126 14:30:27.968307 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" event={"ID":"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035","Type":"ContainerStarted","Data":"f1e30df9fb45760d51638a327054ceca7d8ac12bc44a086a4b410d50b76aeced"} Jan 26 14:30:28 crc kubenswrapper[4922]: I0126 14:30:28.012093 4922 generic.go:334] "Generic (PLEG): container finished" podID="33a20bca-2b21-4f25-a853-04533951ab18" containerID="d41b3d786f11a53f74fef6004b464538dc31896a553f0972f66e4ee33a477bd4" exitCode=0 Jan 26 14:30:28 crc kubenswrapper[4922]: I0126 14:30:28.012257 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p7wdz" event={"ID":"33a20bca-2b21-4f25-a853-04533951ab18","Type":"ContainerDied","Data":"d41b3d786f11a53f74fef6004b464538dc31896a553f0972f66e4ee33a477bd4"} Jan 26 14:30:28 crc kubenswrapper[4922]: I0126 14:30:28.012294 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p7wdz" event={"ID":"33a20bca-2b21-4f25-a853-04533951ab18","Type":"ContainerStarted","Data":"92ef41f203f893f64b820fbdb7c8cd6adb56374f5d9a8e6789462c4394f84107"} Jan 26 14:30:28 crc kubenswrapper[4922]: I0126 14:30:28.015960 4922 generic.go:334] "Generic (PLEG): container finished" podID="85e52d2e-62ff-40ba-9e10-a970927a8e47" containerID="30664a56011979da034c73622a8ad6629aeef01b41586ba4a5aaf1a5ccb6c6d0" exitCode=0 Jan 26 14:30:28 crc kubenswrapper[4922]: I0126 14:30:28.016459 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rh9fp" event={"ID":"85e52d2e-62ff-40ba-9e10-a970927a8e47","Type":"ContainerDied","Data":"30664a56011979da034c73622a8ad6629aeef01b41586ba4a5aaf1a5ccb6c6d0"} Jan 26 14:30:28 crc kubenswrapper[4922]: I0126 14:30:28.016495 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rh9fp" event={"ID":"85e52d2e-62ff-40ba-9e10-a970927a8e47","Type":"ContainerStarted","Data":"89d3edca82aa03168eb8b4bd28d20f06f3b80daeb8f01fccd085232928adad52"} Jan 26 14:30:28 crc kubenswrapper[4922]: I0126 14:30:28.087939 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 14:30:28 crc kubenswrapper[4922]: I0126 14:30:28.105072 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:28 crc kubenswrapper[4922]: I0126 14:30:28.106810 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.024611 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerStarted","Data":"a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d"} Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.025256 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerStarted","Data":"5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e"} Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.025272 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerStarted","Data":"5e3200c2de60879b12a5cacc68573c67e29b4908e61be18d64ab5905b4ba484e"} Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.034719 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"131b28f9-a3ee-401d-a4e0-f66ec118f156","Type":"ContainerStarted","Data":"cff87a890096dd9ec9039e746f2688e3f1db9952fb4e13105a4d9e152fd0a2fc"} Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.034762 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"131b28f9-a3ee-401d-a4e0-f66ec118f156","Type":"ContainerStarted","Data":"3470008569f9a1fb42fedd71b669d6977826b13c32fd80fe2d70908fd465e2eb"} Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.035219 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.075832 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.6987432780000002 podStartE2EDuration="3.075813874s" podCreationTimestamp="2026-01-26 14:30:26 +0000 UTC" firstStartedPulling="2026-01-26 14:30:28.106608438 +0000 UTC m=+1245.308871210" lastFinishedPulling="2026-01-26 14:30:28.483679034 +0000 UTC m=+1245.685941806" observedRunningTime="2026-01-26 14:30:29.068899579 +0000 UTC m=+1246.271162351" watchObservedRunningTime="2026-01-26 14:30:29.075813874 +0000 UTC m=+1246.278076646" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.327466 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75cdcc7857-fs8tr" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.412928 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d5bfcf8c6-kc4k2"] Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.413492 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d5bfcf8c6-kc4k2" podUID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerName="neutron-api" containerID="cri-o://4a44237cfd0d60ec2f3e1bea49a3103357095b7c8190adec69560d7f5da7ad22" gracePeriod=30 Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.413811 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d5bfcf8c6-kc4k2" podUID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerName="neutron-httpd" containerID="cri-o://b07e43790eb63088b25b9cde071bc5ee5d645be310612a221d6a2c645956adb4" gracePeriod=30 Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.684666 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.764783 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/292489b8-e052-41ab-9648-a2113a58ca1b-operator-scripts\") pod \"292489b8-e052-41ab-9648-a2113a58ca1b\" (UID: \"292489b8-e052-41ab-9648-a2113a58ca1b\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.764902 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcjm2\" (UniqueName: \"kubernetes.io/projected/292489b8-e052-41ab-9648-a2113a58ca1b-kube-api-access-xcjm2\") pod \"292489b8-e052-41ab-9648-a2113a58ca1b\" (UID: \"292489b8-e052-41ab-9648-a2113a58ca1b\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.765306 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/292489b8-e052-41ab-9648-a2113a58ca1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "292489b8-e052-41ab-9648-a2113a58ca1b" (UID: "292489b8-e052-41ab-9648-a2113a58ca1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.765672 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/292489b8-e052-41ab-9648-a2113a58ca1b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.771809 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292489b8-e052-41ab-9648-a2113a58ca1b-kube-api-access-xcjm2" (OuterVolumeSpecName: "kube-api-access-xcjm2") pod "292489b8-e052-41ab-9648-a2113a58ca1b" (UID: "292489b8-e052-41ab-9648-a2113a58ca1b"). InnerVolumeSpecName "kube-api-access-xcjm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.867982 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcjm2\" (UniqueName: \"kubernetes.io/projected/292489b8-e052-41ab-9648-a2113a58ca1b-kube-api-access-xcjm2\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.870087 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.897045 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.916903 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.919382 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.919690 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.972786 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee24504-84ec-4bd8-b29e-797fff4db145-operator-scripts\") pod \"4ee24504-84ec-4bd8-b29e-797fff4db145\" (UID: \"4ee24504-84ec-4bd8-b29e-797fff4db145\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.972834 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-operator-scripts\") pod \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\" (UID: \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.972918 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n2q8\" (UniqueName: \"kubernetes.io/projected/33a20bca-2b21-4f25-a853-04533951ab18-kube-api-access-6n2q8\") pod \"33a20bca-2b21-4f25-a853-04533951ab18\" (UID: \"33a20bca-2b21-4f25-a853-04533951ab18\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.972934 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e52d2e-62ff-40ba-9e10-a970927a8e47-operator-scripts\") pod \"85e52d2e-62ff-40ba-9e10-a970927a8e47\" (UID: \"85e52d2e-62ff-40ba-9e10-a970927a8e47\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.973002 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20bca-2b21-4f25-a853-04533951ab18-operator-scripts\") pod \"33a20bca-2b21-4f25-a853-04533951ab18\" (UID: \"33a20bca-2b21-4f25-a853-04533951ab18\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.973155 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt4kg\" (UniqueName: \"kubernetes.io/projected/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-kube-api-access-nt4kg\") pod \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\" (UID: \"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.973201 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjt5w\" (UniqueName: \"kubernetes.io/projected/4ee24504-84ec-4bd8-b29e-797fff4db145-kube-api-access-vjt5w\") pod \"4ee24504-84ec-4bd8-b29e-797fff4db145\" (UID: \"4ee24504-84ec-4bd8-b29e-797fff4db145\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.973226 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gcsh\" (UniqueName: \"kubernetes.io/projected/87e9f925-c521-47fb-bc44-6243504b38ad-kube-api-access-7gcsh\") pod \"87e9f925-c521-47fb-bc44-6243504b38ad\" (UID: \"87e9f925-c521-47fb-bc44-6243504b38ad\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.973244 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87e9f925-c521-47fb-bc44-6243504b38ad-operator-scripts\") pod \"87e9f925-c521-47fb-bc44-6243504b38ad\" (UID: \"87e9f925-c521-47fb-bc44-6243504b38ad\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.973271 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwzgk\" (UniqueName: 
\"kubernetes.io/projected/85e52d2e-62ff-40ba-9e10-a970927a8e47-kube-api-access-xwzgk\") pod \"85e52d2e-62ff-40ba-9e10-a970927a8e47\" (UID: \"85e52d2e-62ff-40ba-9e10-a970927a8e47\") " Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.973685 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ee24504-84ec-4bd8-b29e-797fff4db145-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ee24504-84ec-4bd8-b29e-797fff4db145" (UID: "4ee24504-84ec-4bd8-b29e-797fff4db145"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.974148 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9b04c51-33d8-4c83-9d5e-e11f2e4cf035" (UID: "c9b04c51-33d8-4c83-9d5e-e11f2e4cf035"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.974193 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e52d2e-62ff-40ba-9e10-a970927a8e47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85e52d2e-62ff-40ba-9e10-a970927a8e47" (UID: "85e52d2e-62ff-40ba-9e10-a970927a8e47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.974680 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a20bca-2b21-4f25-a853-04533951ab18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33a20bca-2b21-4f25-a853-04533951ab18" (UID: "33a20bca-2b21-4f25-a853-04533951ab18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.974721 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87e9f925-c521-47fb-bc44-6243504b38ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87e9f925-c521-47fb-bc44-6243504b38ad" (UID: "87e9f925-c521-47fb-bc44-6243504b38ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.978438 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee24504-84ec-4bd8-b29e-797fff4db145-kube-api-access-vjt5w" (OuterVolumeSpecName: "kube-api-access-vjt5w") pod "4ee24504-84ec-4bd8-b29e-797fff4db145" (UID: "4ee24504-84ec-4bd8-b29e-797fff4db145"). InnerVolumeSpecName "kube-api-access-vjt5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.979594 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e52d2e-62ff-40ba-9e10-a970927a8e47-kube-api-access-xwzgk" (OuterVolumeSpecName: "kube-api-access-xwzgk") pod "85e52d2e-62ff-40ba-9e10-a970927a8e47" (UID: "85e52d2e-62ff-40ba-9e10-a970927a8e47"). InnerVolumeSpecName "kube-api-access-xwzgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.979631 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a20bca-2b21-4f25-a853-04533951ab18-kube-api-access-6n2q8" (OuterVolumeSpecName: "kube-api-access-6n2q8") pod "33a20bca-2b21-4f25-a853-04533951ab18" (UID: "33a20bca-2b21-4f25-a853-04533951ab18"). InnerVolumeSpecName "kube-api-access-6n2q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.979687 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-kube-api-access-nt4kg" (OuterVolumeSpecName: "kube-api-access-nt4kg") pod "c9b04c51-33d8-4c83-9d5e-e11f2e4cf035" (UID: "c9b04c51-33d8-4c83-9d5e-e11f2e4cf035"). InnerVolumeSpecName "kube-api-access-nt4kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:29 crc kubenswrapper[4922]: I0126 14:30:29.981305 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e9f925-c521-47fb-bc44-6243504b38ad-kube-api-access-7gcsh" (OuterVolumeSpecName: "kube-api-access-7gcsh") pod "87e9f925-c521-47fb-bc44-6243504b38ad" (UID: "87e9f925-c521-47fb-bc44-6243504b38ad"). InnerVolumeSpecName "kube-api-access-7gcsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.045778 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9807-account-create-update-gjf78" event={"ID":"292489b8-e052-41ab-9648-a2113a58ca1b","Type":"ContainerDied","Data":"24a3a70a4f095bf9ec793d58c252c2b8aa797283bfcc55a7f54da91c5d5afb33"} Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.045815 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24a3a70a4f095bf9ec793d58c252c2b8aa797283bfcc55a7f54da91c5d5afb33" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.045879 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9807-account-create-update-gjf78" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.047369 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cv9zm" event={"ID":"87e9f925-c521-47fb-bc44-6243504b38ad","Type":"ContainerDied","Data":"866ec6627a4f82efe7251f9ec6eafafbe1f3fb4abbc7994a5e17fa2088de7c21"} Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.047395 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866ec6627a4f82efe7251f9ec6eafafbe1f3fb4abbc7994a5e17fa2088de7c21" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.047402 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cv9zm" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.048737 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1213-account-create-update-x4gt5" event={"ID":"4ee24504-84ec-4bd8-b29e-797fff4db145","Type":"ContainerDied","Data":"7397af93e1aac2d33eaa8a3d8520d29c73c898fa50c0c429a5850b56ba7fd551"} Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.048757 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7397af93e1aac2d33eaa8a3d8520d29c73c898fa50c0c429a5850b56ba7fd551" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.048799 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1213-account-create-update-x4gt5" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.052196 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" event={"ID":"c9b04c51-33d8-4c83-9d5e-e11f2e4cf035","Type":"ContainerDied","Data":"f1e30df9fb45760d51638a327054ceca7d8ac12bc44a086a4b410d50b76aeced"} Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.052251 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1e30df9fb45760d51638a327054ceca7d8ac12bc44a086a4b410d50b76aeced" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.052215 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1cf0-account-create-update-mccq6" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.054481 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p7wdz" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.054498 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p7wdz" event={"ID":"33a20bca-2b21-4f25-a853-04533951ab18","Type":"ContainerDied","Data":"92ef41f203f893f64b820fbdb7c8cd6adb56374f5d9a8e6789462c4394f84107"} Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.054628 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92ef41f203f893f64b820fbdb7c8cd6adb56374f5d9a8e6789462c4394f84107" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.059131 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rh9fp" event={"ID":"85e52d2e-62ff-40ba-9e10-a970927a8e47","Type":"ContainerDied","Data":"89d3edca82aa03168eb8b4bd28d20f06f3b80daeb8f01fccd085232928adad52"} Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.059157 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89d3edca82aa03168eb8b4bd28d20f06f3b80daeb8f01fccd085232928adad52" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.059197 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-rh9fp" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.066202 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerStarted","Data":"e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76"} Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.067717 4922 generic.go:334] "Generic (PLEG): container finished" podID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerID="b07e43790eb63088b25b9cde071bc5ee5d645be310612a221d6a2c645956adb4" exitCode=0 Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.068109 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5bfcf8c6-kc4k2" event={"ID":"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5","Type":"ContainerDied","Data":"b07e43790eb63088b25b9cde071bc5ee5d645be310612a221d6a2c645956adb4"} Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075903 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n2q8\" (UniqueName: \"kubernetes.io/projected/33a20bca-2b21-4f25-a853-04533951ab18-kube-api-access-6n2q8\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075929 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e52d2e-62ff-40ba-9e10-a970927a8e47-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075938 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20bca-2b21-4f25-a853-04533951ab18-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075947 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt4kg\" (UniqueName: \"kubernetes.io/projected/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-kube-api-access-nt4kg\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075957 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjt5w\" (UniqueName: \"kubernetes.io/projected/4ee24504-84ec-4bd8-b29e-797fff4db145-kube-api-access-vjt5w\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075966 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gcsh\" (UniqueName: \"kubernetes.io/projected/87e9f925-c521-47fb-bc44-6243504b38ad-kube-api-access-7gcsh\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075974 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87e9f925-c521-47fb-bc44-6243504b38ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075983 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwzgk\" (UniqueName: \"kubernetes.io/projected/85e52d2e-62ff-40ba-9e10-a970927a8e47-kube-api-access-xwzgk\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075991 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ee24504-84ec-4bd8-b29e-797fff4db145-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:30 crc kubenswrapper[4922]: I0126 14:30:30.075999 4922 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:32 crc kubenswrapper[4922]: I0126 14:30:32.087185 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerStarted","Data":"ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3"} Jan 26 14:30:32 crc kubenswrapper[4922]: I0126 14:30:32.087878 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 14:30:32 crc kubenswrapper[4922]: I0126 14:30:32.114141 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.415320624 podStartE2EDuration="6.114126376s" podCreationTimestamp="2026-01-26 14:30:26 +0000 UTC" firstStartedPulling="2026-01-26 14:30:28.109870416 +0000 UTC m=+1245.312133188" lastFinishedPulling="2026-01-26 14:30:30.808676178 +0000 UTC m=+1248.010938940" observedRunningTime="2026-01-26 14:30:32.108797173 +0000 UTC m=+1249.311059955" watchObservedRunningTime="2026-01-26 14:30:32.114126376 +0000 UTC m=+1249.316389148" Jan 26 14:30:33 crc kubenswrapper[4922]: I0126 14:30:33.554609 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:33 crc kubenswrapper[4922]: I0126 14:30:33.554880 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:33 crc kubenswrapper[4922]: I0126 14:30:33.555556 4922 scope.go:117] "RemoveContainer" containerID="1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d" Jan 26 14:30:34 crc kubenswrapper[4922]: I0126 14:30:34.110039 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerStarted","Data":"5ca5facc85e95b131fa76bad59638709c9e9f6dd000971be5d4754e9bbbf1eb6"} Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.477423 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.478012 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="proxy-httpd" containerID="cri-o://ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3" gracePeriod=30 Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.478040 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="sg-core" containerID="cri-o://e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76" gracePeriod=30 Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.478217 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="ceilometer-notification-agent" containerID="cri-o://a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d" gracePeriod=30 Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.479068 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="ceilometer-central-agent" 
containerID="cri-o://5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e" gracePeriod=30 Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.948069 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j7942"] Jan 26 14:30:35 crc kubenswrapper[4922]: E0126 14:30:35.948786 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e9f925-c521-47fb-bc44-6243504b38ad" containerName="mariadb-database-create" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.948801 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e9f925-c521-47fb-bc44-6243504b38ad" containerName="mariadb-database-create" Jan 26 14:30:35 crc kubenswrapper[4922]: E0126 14:30:35.948822 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292489b8-e052-41ab-9648-a2113a58ca1b" containerName="mariadb-account-create-update" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.948830 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="292489b8-e052-41ab-9648-a2113a58ca1b" containerName="mariadb-account-create-update" Jan 26 14:30:35 crc kubenswrapper[4922]: E0126 14:30:35.948842 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e52d2e-62ff-40ba-9e10-a970927a8e47" containerName="mariadb-database-create" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.948851 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e52d2e-62ff-40ba-9e10-a970927a8e47" containerName="mariadb-database-create" Jan 26 14:30:35 crc kubenswrapper[4922]: E0126 14:30:35.948860 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b04c51-33d8-4c83-9d5e-e11f2e4cf035" containerName="mariadb-account-create-update" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.948867 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b04c51-33d8-4c83-9d5e-e11f2e4cf035" containerName="mariadb-account-create-update" Jan 26 14:30:35 crc kubenswrapper[4922]: E0126 14:30:35.948898 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee24504-84ec-4bd8-b29e-797fff4db145" containerName="mariadb-account-create-update" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.948906 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee24504-84ec-4bd8-b29e-797fff4db145" containerName="mariadb-account-create-update" Jan 26 14:30:35 crc kubenswrapper[4922]: E0126 14:30:35.948916 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a20bca-2b21-4f25-a853-04533951ab18" containerName="mariadb-database-create" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.948921 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a20bca-2b21-4f25-a853-04533951ab18" containerName="mariadb-database-create" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.949099 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a20bca-2b21-4f25-a853-04533951ab18" containerName="mariadb-database-create" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.949116 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="292489b8-e052-41ab-9648-a2113a58ca1b" containerName="mariadb-account-create-update" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.949130 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b04c51-33d8-4c83-9d5e-e11f2e4cf035" containerName="mariadb-account-create-update" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.949136 4922 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="85e52d2e-62ff-40ba-9e10-a970927a8e47" containerName="mariadb-database-create" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.949154 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e9f925-c521-47fb-bc44-6243504b38ad" containerName="mariadb-database-create" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.949160 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee24504-84ec-4bd8-b29e-797fff4db145" containerName="mariadb-account-create-update" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.950448 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.952388 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.952393 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ljf2f" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.953251 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.968023 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j7942"] Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.988634 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htlvv\" (UniqueName: \"kubernetes.io/projected/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-kube-api-access-htlvv\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.988861 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-scripts\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.988993 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:35 crc kubenswrapper[4922]: I0126 14:30:35.989138 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-config-data\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.090908 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-config-data\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.090989 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-htlvv\" (UniqueName: \"kubernetes.io/projected/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-kube-api-access-htlvv\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.091104 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-scripts\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.091171 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.099006 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.099915 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-scripts\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.121916 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-config-data\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.127965 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htlvv\" (UniqueName: \"kubernetes.io/projected/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-kube-api-access-htlvv\") pod \"nova-cell0-conductor-db-sync-j7942\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.148466 4922 generic.go:334] "Generic (PLEG): container finished" podID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerID="ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3" exitCode=0 Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.148714 4922 generic.go:334] "Generic (PLEG): container finished" podID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerID="e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76" exitCode=2 Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.148820 4922 generic.go:334] "Generic (PLEG): container finished" podID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerID="5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e" exitCode=0 Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.149018 4922 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerDied","Data":"ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3"} Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.149162 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerDied","Data":"e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76"} Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.149284 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerDied","Data":"5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e"} Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.156842 4922 generic.go:334] "Generic (PLEG): container finished" podID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerID="4a44237cfd0d60ec2f3e1bea49a3103357095b7c8190adec69560d7f5da7ad22" exitCode=0 Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.156888 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5bfcf8c6-kc4k2" event={"ID":"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5","Type":"ContainerDied","Data":"4a44237cfd0d60ec2f3e1bea49a3103357095b7c8190adec69560d7f5da7ad22"} Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.221146 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.267844 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.294614 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-httpd-config\") pod \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.294790 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxwxl\" (UniqueName: \"kubernetes.io/projected/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-kube-api-access-lxwxl\") pod \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.294864 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-ovndb-tls-certs\") pod \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.294908 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-combined-ca-bundle\") pod \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.295007 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-config\") pod \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\" (UID: \"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.298145 4922 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" (UID: "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.305574 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-kube-api-access-lxwxl" (OuterVolumeSpecName: "kube-api-access-lxwxl") pod "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" (UID: "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5"). InnerVolumeSpecName "kube-api-access-lxwxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.357704 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" (UID: "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.385189 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-config" (OuterVolumeSpecName: "config") pod "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" (UID: "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.397525 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.397548 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.397557 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.397567 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxwxl\" (UniqueName: \"kubernetes.io/projected/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-kube-api-access-lxwxl\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.431219 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" (UID: "9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.498903 4922 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.757831 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.857817 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j7942"] Jan 26 14:30:36 crc kubenswrapper[4922]: W0126 14:30:36.862663 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod226a1df4_9c6e_48d7_9c7f_b1d06f797a65.slice/crio-624dfc3bfb8b3d097223c1d530c56e430d51edd697f6f26219bb6df2e113ab41 WatchSource:0}: Error finding container 624dfc3bfb8b3d097223c1d530c56e430d51edd697f6f26219bb6df2e113ab41: Status 404 returned error can't find the container with id 624dfc3bfb8b3d097223c1d530c56e430d51edd697f6f26219bb6df2e113ab41 Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.910627 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qwqp\" (UniqueName: \"kubernetes.io/projected/04d5ab2a-c747-4f88-9762-9371202cfa28-kube-api-access-9qwqp\") pod \"04d5ab2a-c747-4f88-9762-9371202cfa28\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.910752 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-log-httpd\") pod \"04d5ab2a-c747-4f88-9762-9371202cfa28\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.910816 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-ceilometer-tls-certs\") pod \"04d5ab2a-c747-4f88-9762-9371202cfa28\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.910852 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-scripts\") pod \"04d5ab2a-c747-4f88-9762-9371202cfa28\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.910887 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-sg-core-conf-yaml\") pod \"04d5ab2a-c747-4f88-9762-9371202cfa28\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.910909 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-config-data\") pod \"04d5ab2a-c747-4f88-9762-9371202cfa28\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.911043 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-combined-ca-bundle\") pod \"04d5ab2a-c747-4f88-9762-9371202cfa28\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.911074 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-run-httpd\") pod \"04d5ab2a-c747-4f88-9762-9371202cfa28\" (UID: \"04d5ab2a-c747-4f88-9762-9371202cfa28\") " Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.912082 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04d5ab2a-c747-4f88-9762-9371202cfa28" (UID: "04d5ab2a-c747-4f88-9762-9371202cfa28"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.912853 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "04d5ab2a-c747-4f88-9762-9371202cfa28" (UID: "04d5ab2a-c747-4f88-9762-9371202cfa28"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.918531 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d5ab2a-c747-4f88-9762-9371202cfa28-kube-api-access-9qwqp" (OuterVolumeSpecName: "kube-api-access-9qwqp") pod "04d5ab2a-c747-4f88-9762-9371202cfa28" (UID: "04d5ab2a-c747-4f88-9762-9371202cfa28"). InnerVolumeSpecName "kube-api-access-9qwqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.935119 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-scripts" (OuterVolumeSpecName: "scripts") pod "04d5ab2a-c747-4f88-9762-9371202cfa28" (UID: "04d5ab2a-c747-4f88-9762-9371202cfa28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:36 crc kubenswrapper[4922]: I0126 14:30:36.951579 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04d5ab2a-c747-4f88-9762-9371202cfa28" (UID: "04d5ab2a-c747-4f88-9762-9371202cfa28"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.013581 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.013614 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.013625 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qwqp\" (UniqueName: \"kubernetes.io/projected/04d5ab2a-c747-4f88-9762-9371202cfa28-kube-api-access-9qwqp\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.013635 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04d5ab2a-c747-4f88-9762-9371202cfa28-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.013644 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.017103 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04d5ab2a-c747-4f88-9762-9371202cfa28" (UID: "04d5ab2a-c747-4f88-9762-9371202cfa28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.021795 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "04d5ab2a-c747-4f88-9762-9371202cfa28" (UID: "04d5ab2a-c747-4f88-9762-9371202cfa28"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.063696 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-config-data" (OuterVolumeSpecName: "config-data") pod "04d5ab2a-c747-4f88-9762-9371202cfa28" (UID: "04d5ab2a-c747-4f88-9762-9371202cfa28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.116347 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.116381 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.116392 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d5ab2a-c747-4f88-9762-9371202cfa28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.166454 4922 generic.go:334] "Generic (PLEG): container finished" podID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerID="a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d" exitCode=0 Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.166514 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.166531 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerDied","Data":"a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d"} Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.167159 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04d5ab2a-c747-4f88-9762-9371202cfa28","Type":"ContainerDied","Data":"5e3200c2de60879b12a5cacc68573c67e29b4908e61be18d64ab5905b4ba484e"} Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.167180 4922 scope.go:117] "RemoveContainer" containerID="ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.171815 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5bfcf8c6-kc4k2" event={"ID":"9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5","Type":"ContainerDied","Data":"4365bf2aa6fa42306643ab8bb8ccb29f00bcc611b5756fac424abb647cb17548"} Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.171886 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d5bfcf8c6-kc4k2" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.177519 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j7942" event={"ID":"226a1df4-9c6e-48d7-9c7f-b1d06f797a65","Type":"ContainerStarted","Data":"624dfc3bfb8b3d097223c1d530c56e430d51edd697f6f26219bb6df2e113ab41"} Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.195121 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.199613 4922 scope.go:117] "RemoveContainer" containerID="e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.203643 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.220922 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d5bfcf8c6-kc4k2"] Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.234128 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d5bfcf8c6-kc4k2"] Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.252380 4922 scope.go:117] "RemoveContainer" containerID="a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.276284 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.276682 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="sg-core" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.276698 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="sg-core" Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.276713 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="proxy-httpd" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.276719 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="proxy-httpd" Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.276732 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="ceilometer-central-agent" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.276738 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="ceilometer-central-agent" Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.276745 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerName="neutron-api" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.276750 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerName="neutron-api" Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.276763 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerName="neutron-httpd" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.276769 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerName="neutron-httpd" Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.276791 4922 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="ceilometer-notification-agent" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.276797 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="ceilometer-notification-agent" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.276988 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerName="neutron-httpd" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.276997 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="ceilometer-notification-agent" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.277011 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="sg-core" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.277022 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="proxy-httpd" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.277036 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" containerName="neutron-api" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.277046 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" containerName="ceilometer-central-agent" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.278885 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.283439 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.283549 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.283628 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.283692 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.291323 4922 scope.go:117] "RemoveContainer" containerID="5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.316748 4922 scope.go:117] "RemoveContainer" containerID="ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3" Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.317121 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3\": container with ID starting with ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3 not found: ID does not exist" containerID="ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.317234 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3"} err="failed to get container status 
\"ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3\": rpc error: code = NotFound desc = could not find container \"ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3\": container with ID starting with ba13ffc41342cc58eda96f62c4c54eaee60d77f57eacfdee0e5bad2d766f84b3 not found: ID does not exist" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.317320 4922 scope.go:117] "RemoveContainer" containerID="e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76" Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.319446 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76\": container with ID starting with e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76 not found: ID does not exist" containerID="e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.319523 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76"} err="failed to get container status \"e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76\": rpc error: code = NotFound desc = could not find container \"e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76\": container with ID starting with e906da56042d70a2fd88f4bf77c080bf6a9c49706b5359d7bae71063765a7e76 not found: ID does not exist" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.319598 4922 scope.go:117] "RemoveContainer" containerID="a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d" Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.320643 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d\": container with ID starting with a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d not found: ID does not exist" containerID="a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.320689 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d"} err="failed to get container status \"a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d\": rpc error: code = NotFound desc = could not find container \"a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d\": container with ID starting with a223eeca804250f525813e242a945b9e5e6fa55065180b13a436d016313c988d not found: ID does not exist" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.320713 4922 scope.go:117] "RemoveContainer" containerID="5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e" Jan 26 14:30:37 crc kubenswrapper[4922]: E0126 14:30:37.321646 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e\": container with ID starting with 5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e not found: ID does not exist" containerID="5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.321674 4922 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e"} err="failed to get container status \"5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e\": rpc error: code = NotFound desc = could not find container \"5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e\": container with ID starting with 5243d0454c8212957db89a685400cc1ac3235b33f02dd3a43b582996302fd95e not found: ID does not exist" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.321691 4922 scope.go:117] "RemoveContainer" containerID="b07e43790eb63088b25b9cde071bc5ee5d645be310612a221d6a2c645956adb4" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.347108 4922 scope.go:117] "RemoveContainer" containerID="4a44237cfd0d60ec2f3e1bea49a3103357095b7c8190adec69560d7f5da7ad22" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.422052 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.422134 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-config-data\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.423043 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.423178 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-run-httpd\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.423215 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.423284 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-log-httpd\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.423318 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsc6h\" (UniqueName: \"kubernetes.io/projected/398dc2bd-cd9e-4504-8987-ace41488c890-kube-api-access-gsc6h\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.423387 
4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-scripts\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.524860 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.525894 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-log-httpd\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.526047 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsc6h\" (UniqueName: \"kubernetes.io/projected/398dc2bd-cd9e-4504-8987-ace41488c890-kube-api-access-gsc6h\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.526262 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-scripts\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.526424 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.526540 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-config-data\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.526652 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.526766 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-run-httpd\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.527668 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-log-httpd\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.530208 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.530276 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.530565 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-run-httpd\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.531722 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.531922 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-scripts\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.535227 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-config-data\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.535533 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.541396 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsc6h\" (UniqueName: \"kubernetes.io/projected/398dc2bd-cd9e-4504-8987-ace41488c890-kube-api-access-gsc6h\") pod \"ceilometer-0\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " pod="openstack/ceilometer-0" Jan 26 14:30:37 crc kubenswrapper[4922]: I0126 14:30:37.607482 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:38 crc kubenswrapper[4922]: I0126 14:30:38.139008 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:38 crc kubenswrapper[4922]: W0126 14:30:38.150019 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod398dc2bd_cd9e_4504_8987_ace41488c890.slice/crio-981b38351ba0c592f9d55394ae50b1519e7cbac5333d4d62c066748e7d00fb93 WatchSource:0}: Error finding container 981b38351ba0c592f9d55394ae50b1519e7cbac5333d4d62c066748e7d00fb93: Status 404 returned error can't find the container with id 981b38351ba0c592f9d55394ae50b1519e7cbac5333d4d62c066748e7d00fb93 Jan 26 14:30:38 crc kubenswrapper[4922]: I0126 14:30:38.201810 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerStarted","Data":"981b38351ba0c592f9d55394ae50b1519e7cbac5333d4d62c066748e7d00fb93"} Jan 26 14:30:39 crc kubenswrapper[4922]: I0126 14:30:39.126204 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d5ab2a-c747-4f88-9762-9371202cfa28" path="/var/lib/kubelet/pods/04d5ab2a-c747-4f88-9762-9371202cfa28/volumes" Jan 26 14:30:39 crc kubenswrapper[4922]: I0126 14:30:39.128453 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5" path="/var/lib/kubelet/pods/9bc9a317-1b7c-4c53-b6c4-84b9fc4ee0b5/volumes" Jan 26 14:30:39 crc kubenswrapper[4922]: I0126 14:30:39.183724 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:39 crc kubenswrapper[4922]: I0126 14:30:39.215105 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerStarted","Data":"f7b8a869a2acfc12c7d7881981de245c5714e64cfb6166a5e00ce0b397a725eb"} Jan 26 14:30:39 crc kubenswrapper[4922]: I0126 14:30:39.215416 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerStarted","Data":"ef779525e5b2c03ea14d08a5008a0967729e0616a954aacb0605702149ab9b72"} Jan 26 14:30:40 crc kubenswrapper[4922]: I0126 14:30:40.224870 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerStarted","Data":"c4ee2a85560bdc3ea066e1b7d600110fff5823a541c2e89124ebd46d086784e8"} Jan 26 14:30:41 crc kubenswrapper[4922]: I0126 14:30:41.234929 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerStarted","Data":"a4500e83e77c86eb0df7689ecef68faaffb3c5d98cc89e91d5eb2f439f14d0ba"} Jan 26 14:30:41 crc kubenswrapper[4922]: I0126 14:30:41.236006 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 14:30:41 crc kubenswrapper[4922]: I0126 14:30:41.235768 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="proxy-httpd" containerID="cri-o://a4500e83e77c86eb0df7689ecef68faaffb3c5d98cc89e91d5eb2f439f14d0ba" gracePeriod=30 Jan 26 14:30:41 crc kubenswrapper[4922]: I0126 14:30:41.235787 4922 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="sg-core" containerID="cri-o://c4ee2a85560bdc3ea066e1b7d600110fff5823a541c2e89124ebd46d086784e8" gracePeriod=30 Jan 26 14:30:41 crc kubenswrapper[4922]: I0126 14:30:41.235801 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="ceilometer-notification-agent" containerID="cri-o://f7b8a869a2acfc12c7d7881981de245c5714e64cfb6166a5e00ce0b397a725eb" gracePeriod=30 Jan 26 14:30:41 crc kubenswrapper[4922]: I0126 14:30:41.235120 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="ceilometer-central-agent" containerID="cri-o://ef779525e5b2c03ea14d08a5008a0967729e0616a954aacb0605702149ab9b72" gracePeriod=30 Jan 26 14:30:41 crc kubenswrapper[4922]: I0126 14:30:41.261358 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.744664213 podStartE2EDuration="4.261341895s" podCreationTimestamp="2026-01-26 14:30:37 +0000 UTC" firstStartedPulling="2026-01-26 14:30:38.155284417 +0000 UTC m=+1255.357547179" lastFinishedPulling="2026-01-26 14:30:40.671962089 +0000 UTC m=+1257.874224861" observedRunningTime="2026-01-26 14:30:41.252733214 +0000 UTC m=+1258.454995986" watchObservedRunningTime="2026-01-26 14:30:41.261341895 +0000 UTC m=+1258.463604667" Jan 26 14:30:42 crc kubenswrapper[4922]: I0126 14:30:42.247243 4922 generic.go:334] "Generic (PLEG): container finished" podID="398dc2bd-cd9e-4504-8987-ace41488c890" containerID="a4500e83e77c86eb0df7689ecef68faaffb3c5d98cc89e91d5eb2f439f14d0ba" exitCode=0 Jan 26 14:30:42 crc kubenswrapper[4922]: I0126 14:30:42.247578 4922 generic.go:334] "Generic (PLEG): container finished" podID="398dc2bd-cd9e-4504-8987-ace41488c890" containerID="c4ee2a85560bdc3ea066e1b7d600110fff5823a541c2e89124ebd46d086784e8" exitCode=2 Jan 26 14:30:42 crc kubenswrapper[4922]: I0126 14:30:42.247589 4922 generic.go:334] "Generic (PLEG): container finished" podID="398dc2bd-cd9e-4504-8987-ace41488c890" containerID="f7b8a869a2acfc12c7d7881981de245c5714e64cfb6166a5e00ce0b397a725eb" exitCode=0 Jan 26 14:30:42 crc kubenswrapper[4922]: I0126 14:30:42.247392 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerDied","Data":"a4500e83e77c86eb0df7689ecef68faaffb3c5d98cc89e91d5eb2f439f14d0ba"} Jan 26 14:30:42 crc kubenswrapper[4922]: I0126 14:30:42.247623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerDied","Data":"c4ee2a85560bdc3ea066e1b7d600110fff5823a541c2e89124ebd46d086784e8"} Jan 26 14:30:42 crc kubenswrapper[4922]: I0126 14:30:42.247637 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerDied","Data":"f7b8a869a2acfc12c7d7881981de245c5714e64cfb6166a5e00ce0b397a725eb"} Jan 26 14:30:43 crc kubenswrapper[4922]: I0126 14:30:43.554870 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:43 crc kubenswrapper[4922]: I0126 14:30:43.584870 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 
14:30:44 crc kubenswrapper[4922]: I0126 14:30:44.264133 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:44 crc kubenswrapper[4922]: I0126 14:30:44.308215 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:44 crc kubenswrapper[4922]: I0126 14:30:44.574995 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:30:46 crc kubenswrapper[4922]: I0126 14:30:46.283123 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" containerID="cri-o://5ca5facc85e95b131fa76bad59638709c9e9f6dd000971be5d4754e9bbbf1eb6" gracePeriod=30 Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.312225 4922 generic.go:334] "Generic (PLEG): container finished" podID="398dc2bd-cd9e-4504-8987-ace41488c890" containerID="ef779525e5b2c03ea14d08a5008a0967729e0616a954aacb0605702149ab9b72" exitCode=0 Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.312405 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerDied","Data":"ef779525e5b2c03ea14d08a5008a0967729e0616a954aacb0605702149ab9b72"} Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.704713 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.779189 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsc6h\" (UniqueName: \"kubernetes.io/projected/398dc2bd-cd9e-4504-8987-ace41488c890-kube-api-access-gsc6h\") pod \"398dc2bd-cd9e-4504-8987-ace41488c890\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.779259 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-log-httpd\") pod \"398dc2bd-cd9e-4504-8987-ace41488c890\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.779283 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-sg-core-conf-yaml\") pod \"398dc2bd-cd9e-4504-8987-ace41488c890\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.779330 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-scripts\") pod \"398dc2bd-cd9e-4504-8987-ace41488c890\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.779420 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-config-data\") pod \"398dc2bd-cd9e-4504-8987-ace41488c890\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.779466 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-run-httpd\") pod \"398dc2bd-cd9e-4504-8987-ace41488c890\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.779491 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-ceilometer-tls-certs\") pod \"398dc2bd-cd9e-4504-8987-ace41488c890\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.779510 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-combined-ca-bundle\") pod \"398dc2bd-cd9e-4504-8987-ace41488c890\" (UID: \"398dc2bd-cd9e-4504-8987-ace41488c890\") " Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.780760 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "398dc2bd-cd9e-4504-8987-ace41488c890" (UID: "398dc2bd-cd9e-4504-8987-ace41488c890"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.781023 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "398dc2bd-cd9e-4504-8987-ace41488c890" (UID: "398dc2bd-cd9e-4504-8987-ace41488c890"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.785010 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/398dc2bd-cd9e-4504-8987-ace41488c890-kube-api-access-gsc6h" (OuterVolumeSpecName: "kube-api-access-gsc6h") pod "398dc2bd-cd9e-4504-8987-ace41488c890" (UID: "398dc2bd-cd9e-4504-8987-ace41488c890"). InnerVolumeSpecName "kube-api-access-gsc6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.785309 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-scripts" (OuterVolumeSpecName: "scripts") pod "398dc2bd-cd9e-4504-8987-ace41488c890" (UID: "398dc2bd-cd9e-4504-8987-ace41488c890"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.805959 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "398dc2bd-cd9e-4504-8987-ace41488c890" (UID: "398dc2bd-cd9e-4504-8987-ace41488c890"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.854782 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "398dc2bd-cd9e-4504-8987-ace41488c890" (UID: "398dc2bd-cd9e-4504-8987-ace41488c890"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.882054 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.882118 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.882131 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.882147 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsc6h\" (UniqueName: \"kubernetes.io/projected/398dc2bd-cd9e-4504-8987-ace41488c890-kube-api-access-gsc6h\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.882163 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/398dc2bd-cd9e-4504-8987-ace41488c890-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.882177 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.885864 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "398dc2bd-cd9e-4504-8987-ace41488c890" (UID: "398dc2bd-cd9e-4504-8987-ace41488c890"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.913951 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-config-data" (OuterVolumeSpecName: "config-data") pod "398dc2bd-cd9e-4504-8987-ace41488c890" (UID: "398dc2bd-cd9e-4504-8987-ace41488c890"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.984028 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:49 crc kubenswrapper[4922]: I0126 14:30:49.984320 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/398dc2bd-cd9e-4504-8987-ace41488c890-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.322640 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.322660 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"398dc2bd-cd9e-4504-8987-ace41488c890","Type":"ContainerDied","Data":"981b38351ba0c592f9d55394ae50b1519e7cbac5333d4d62c066748e7d00fb93"} Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.322798 4922 scope.go:117] "RemoveContainer" containerID="a4500e83e77c86eb0df7689ecef68faaffb3c5d98cc89e91d5eb2f439f14d0ba" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.324730 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j7942" event={"ID":"226a1df4-9c6e-48d7-9c7f-b1d06f797a65","Type":"ContainerStarted","Data":"d222c4e696659287b31a26a1d7a2454dde709d9bdafd05e138d116619c5b2a98"} Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.356785 4922 scope.go:117] "RemoveContainer" containerID="c4ee2a85560bdc3ea066e1b7d600110fff5823a541c2e89124ebd46d086784e8" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.380739 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-j7942" podStartSLOduration=2.747137258 podStartE2EDuration="15.380717267s" podCreationTimestamp="2026-01-26 14:30:35 +0000 UTC" firstStartedPulling="2026-01-26 14:30:36.865861029 +0000 UTC m=+1254.068123801" lastFinishedPulling="2026-01-26 14:30:49.499441038 +0000 UTC m=+1266.701703810" observedRunningTime="2026-01-26 14:30:50.356786176 +0000 UTC m=+1267.559048958" watchObservedRunningTime="2026-01-26 14:30:50.380717267 +0000 UTC m=+1267.582980059" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.381096 4922 scope.go:117] "RemoveContainer" containerID="f7b8a869a2acfc12c7d7881981de245c5714e64cfb6166a5e00ce0b397a725eb" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.381481 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.399877 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.407435 4922 scope.go:117] "RemoveContainer" containerID="ef779525e5b2c03ea14d08a5008a0967729e0616a954aacb0605702149ab9b72" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.418112 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:50 crc kubenswrapper[4922]: E0126 14:30:50.418540 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="ceilometer-central-agent" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.418556 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="ceilometer-central-agent" Jan 26 14:30:50 crc kubenswrapper[4922]: E0126 14:30:50.418573 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="proxy-httpd" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.418579 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="proxy-httpd" Jan 26 14:30:50 crc kubenswrapper[4922]: E0126 14:30:50.418606 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="sg-core" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.418613 4922 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="sg-core" Jan 26 14:30:50 crc kubenswrapper[4922]: E0126 14:30:50.418627 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="ceilometer-notification-agent" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.418634 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="ceilometer-notification-agent" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.418820 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="sg-core" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.418835 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="ceilometer-central-agent" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.418848 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="ceilometer-notification-agent" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.418863 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" containerName="proxy-httpd" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.420493 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.426185 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.426382 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.426612 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.445987 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.446226 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerName="glance-log" containerID="cri-o://be148c1a2cbb208f14758738600f7170e55f9f18a3c3240a5d7315b021c0c849" gracePeriod=30 Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.446361 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerName="glance-httpd" containerID="cri-o://97c12d126b31eb8fbe57a841922a2a1ebe3236d06bf9788ae8ef3ec250d47c8e" gracePeriod=30 Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.489060 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.495013 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.495058 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-scripts\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.495139 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-config-data\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.495161 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-log-httpd\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.495190 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgmd\" (UniqueName: \"kubernetes.io/projected/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-kube-api-access-rdgmd\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.495241 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.495259 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.495289 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-run-httpd\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.597405 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.597449 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-scripts\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.597518 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-config-data\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " 
pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.597545 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-log-httpd\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.597576 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgmd\" (UniqueName: \"kubernetes.io/projected/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-kube-api-access-rdgmd\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.597627 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.597645 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.597669 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-run-httpd\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.598149 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-run-httpd\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.598380 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-log-httpd\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.603853 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.604088 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-config-data\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.605476 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 
26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.610598 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.618570 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-scripts\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.619419 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgmd\" (UniqueName: \"kubernetes.io/projected/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-kube-api-access-rdgmd\") pod \"ceilometer-0\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " pod="openstack/ceilometer-0" Jan 26 14:30:50 crc kubenswrapper[4922]: I0126 14:30:50.745213 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:30:51 crc kubenswrapper[4922]: I0126 14:30:51.105003 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="398dc2bd-cd9e-4504-8987-ace41488c890" path="/var/lib/kubelet/pods/398dc2bd-cd9e-4504-8987-ace41488c890/volumes" Jan 26 14:30:51 crc kubenswrapper[4922]: I0126 14:30:51.252034 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:51 crc kubenswrapper[4922]: I0126 14:30:51.335095 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerStarted","Data":"2ad4d25d96f32c595960721698b03b58301adc7e5a261ca61af13eb3a44180fa"} Jan 26 14:30:51 crc kubenswrapper[4922]: I0126 14:30:51.338004 4922 generic.go:334] "Generic (PLEG): container finished" podID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerID="be148c1a2cbb208f14758738600f7170e55f9f18a3c3240a5d7315b021c0c849" exitCode=143 Jan 26 14:30:51 crc kubenswrapper[4922]: I0126 14:30:51.338154 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a1d1b1d-b797-496c-b42e-d4b66f59115c","Type":"ContainerDied","Data":"be148c1a2cbb208f14758738600f7170e55f9f18a3c3240a5d7315b021c0c849"} Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.245663 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.248293 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="00524d24-79f3-444a-b95b-1cb294892c78" containerName="glance-log" containerID="cri-o://de114b6e89954adbd7951400b69a850f023d66b3c83c1a444b73a52fddd6e339" gracePeriod=30 Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.248717 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="00524d24-79f3-444a-b95b-1cb294892c78" containerName="glance-httpd" containerID="cri-o://17265ebaadc079245d56399e4c3412ded4e8a7eddb5ca58e74c8f7510fdbee74" gracePeriod=30 Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.351487 4922 generic.go:334] "Generic (PLEG): container finished" podID="1678095e-0a1d-4199-90c6-ea3afc879e0b" 
containerID="5ca5facc85e95b131fa76bad59638709c9e9f6dd000971be5d4754e9bbbf1eb6" exitCode=0 Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.351570 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerDied","Data":"5ca5facc85e95b131fa76bad59638709c9e9f6dd000971be5d4754e9bbbf1eb6"} Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.351627 4922 scope.go:117] "RemoveContainer" containerID="1ca8b620f69935ab95ade17bc4baf313c05a4f0644c2e9c33806b155479d339d" Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.356322 4922 generic.go:334] "Generic (PLEG): container finished" podID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerID="97c12d126b31eb8fbe57a841922a2a1ebe3236d06bf9788ae8ef3ec250d47c8e" exitCode=0 Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.356358 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a1d1b1d-b797-496c-b42e-d4b66f59115c","Type":"ContainerDied","Data":"97c12d126b31eb8fbe57a841922a2a1ebe3236d06bf9788ae8ef3ec250d47c8e"} Jan 26 14:30:52 crc kubenswrapper[4922]: E0126 14:30:52.405288 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a1d1b1d_b797_496c_b42e_d4b66f59115c.slice/crio-conmon-97c12d126b31eb8fbe57a841922a2a1ebe3236d06bf9788ae8ef3ec250d47c8e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1678095e_0a1d_4199_90c6_ea3afc879e0b.slice/crio-conmon-5ca5facc85e95b131fa76bad59638709c9e9f6dd000971be5d4754e9bbbf1eb6.scope\": RecentStats: unable to find data in memory cache]" Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.869508 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.971195 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-custom-prometheus-ca\") pod \"1678095e-0a1d-4199-90c6-ea3afc879e0b\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.971276 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-config-data\") pod \"1678095e-0a1d-4199-90c6-ea3afc879e0b\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.971317 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfp28\" (UniqueName: \"kubernetes.io/projected/1678095e-0a1d-4199-90c6-ea3afc879e0b-kube-api-access-cfp28\") pod \"1678095e-0a1d-4199-90c6-ea3afc879e0b\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.971481 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-combined-ca-bundle\") pod \"1678095e-0a1d-4199-90c6-ea3afc879e0b\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.971573 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1678095e-0a1d-4199-90c6-ea3afc879e0b-logs\") pod \"1678095e-0a1d-4199-90c6-ea3afc879e0b\" (UID: \"1678095e-0a1d-4199-90c6-ea3afc879e0b\") " Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.972277 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1678095e-0a1d-4199-90c6-ea3afc879e0b-logs" (OuterVolumeSpecName: "logs") pod "1678095e-0a1d-4199-90c6-ea3afc879e0b" (UID: "1678095e-0a1d-4199-90c6-ea3afc879e0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:52 crc kubenswrapper[4922]: I0126 14:30:52.976780 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1678095e-0a1d-4199-90c6-ea3afc879e0b-kube-api-access-cfp28" (OuterVolumeSpecName: "kube-api-access-cfp28") pod "1678095e-0a1d-4199-90c6-ea3afc879e0b" (UID: "1678095e-0a1d-4199-90c6-ea3afc879e0b"). InnerVolumeSpecName "kube-api-access-cfp28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.013047 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1678095e-0a1d-4199-90c6-ea3afc879e0b" (UID: "1678095e-0a1d-4199-90c6-ea3afc879e0b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.023437 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1678095e-0a1d-4199-90c6-ea3afc879e0b" (UID: "1678095e-0a1d-4199-90c6-ea3afc879e0b"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.055467 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-config-data" (OuterVolumeSpecName: "config-data") pod "1678095e-0a1d-4199-90c6-ea3afc879e0b" (UID: "1678095e-0a1d-4199-90c6-ea3afc879e0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.076279 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.076312 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfp28\" (UniqueName: \"kubernetes.io/projected/1678095e-0a1d-4199-90c6-ea3afc879e0b-kube-api-access-cfp28\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.076324 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.076332 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1678095e-0a1d-4199-90c6-ea3afc879e0b-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.076341 4922 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1678095e-0a1d-4199-90c6-ea3afc879e0b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.078945 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.179516 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-combined-ca-bundle\") pod \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.179567 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-httpd-run\") pod \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.179709 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-config-data\") pod \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.179757 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78m8q\" (UniqueName: \"kubernetes.io/projected/4a1d1b1d-b797-496c-b42e-d4b66f59115c-kube-api-access-78m8q\") pod \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.179795 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-public-tls-certs\") pod \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.179864 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-scripts\") pod \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.179892 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-logs\") pod \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.179905 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\" (UID: \"4a1d1b1d-b797-496c-b42e-d4b66f59115c\") " Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.184173 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-logs" (OuterVolumeSpecName: "logs") pod "4a1d1b1d-b797-496c-b42e-d4b66f59115c" (UID: "4a1d1b1d-b797-496c-b42e-d4b66f59115c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.184302 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4a1d1b1d-b797-496c-b42e-d4b66f59115c" (UID: "4a1d1b1d-b797-496c-b42e-d4b66f59115c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.188942 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1d1b1d-b797-496c-b42e-d4b66f59115c-kube-api-access-78m8q" (OuterVolumeSpecName: "kube-api-access-78m8q") pod "4a1d1b1d-b797-496c-b42e-d4b66f59115c" (UID: "4a1d1b1d-b797-496c-b42e-d4b66f59115c"). InnerVolumeSpecName "kube-api-access-78m8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.193209 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-scripts" (OuterVolumeSpecName: "scripts") pod "4a1d1b1d-b797-496c-b42e-d4b66f59115c" (UID: "4a1d1b1d-b797-496c-b42e-d4b66f59115c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.194214 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "4a1d1b1d-b797-496c-b42e-d4b66f59115c" (UID: "4a1d1b1d-b797-496c-b42e-d4b66f59115c"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.258835 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a1d1b1d-b797-496c-b42e-d4b66f59115c" (UID: "4a1d1b1d-b797-496c-b42e-d4b66f59115c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.272693 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-config-data" (OuterVolumeSpecName: "config-data") pod "4a1d1b1d-b797-496c-b42e-d4b66f59115c" (UID: "4a1d1b1d-b797-496c-b42e-d4b66f59115c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.282875 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78m8q\" (UniqueName: \"kubernetes.io/projected/4a1d1b1d-b797-496c-b42e-d4b66f59115c-kube-api-access-78m8q\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.282908 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.282932 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.282959 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.282970 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.282979 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a1d1b1d-b797-496c-b42e-d4b66f59115c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.283000 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.287137 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4a1d1b1d-b797-496c-b42e-d4b66f59115c" (UID: "4a1d1b1d-b797-496c-b42e-d4b66f59115c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.303052 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.366015 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4a1d1b1d-b797-496c-b42e-d4b66f59115c","Type":"ContainerDied","Data":"8071cb8d07cbf7cd136fde30e7dae0dbfdbf62759b5353be5bad0895eeabd8df"} Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.366076 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.366093 4922 scope.go:117] "RemoveContainer" containerID="97c12d126b31eb8fbe57a841922a2a1ebe3236d06bf9788ae8ef3ec250d47c8e" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.369763 4922 generic.go:334] "Generic (PLEG): container finished" podID="00524d24-79f3-444a-b95b-1cb294892c78" containerID="de114b6e89954adbd7951400b69a850f023d66b3c83c1a444b73a52fddd6e339" exitCode=143 Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.369832 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00524d24-79f3-444a-b95b-1cb294892c78","Type":"ContainerDied","Data":"de114b6e89954adbd7951400b69a850f023d66b3c83c1a444b73a52fddd6e339"} Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.374833 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1678095e-0a1d-4199-90c6-ea3afc879e0b","Type":"ContainerDied","Data":"02c1fbe28400642a12b950957b5ab56000f825977cf38a17977628f451180fc2"} Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.374908 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.386564 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.386590 4922 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1d1b1d-b797-496c-b42e-d4b66f59115c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.387562 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerStarted","Data":"888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df"} Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.387598 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerStarted","Data":"b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6"} Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.429123 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.442758 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.444131 4922 scope.go:117] "RemoveContainer" containerID="be148c1a2cbb208f14758738600f7170e55f9f18a3c3240a5d7315b021c0c849" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.455198 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.481787 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.482637 4922 scope.go:117] "RemoveContainer" containerID="5ca5facc85e95b131fa76bad59638709c9e9f6dd000971be5d4754e9bbbf1eb6" Jan 26 14:30:53 crc kubenswrapper[4922]: E0126 14:30:53.482661 4922 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerName="glance-httpd" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.482911 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerName="glance-httpd" Jan 26 14:30:53 crc kubenswrapper[4922]: E0126 14:30:53.483027 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.483123 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: E0126 14:30:53.483225 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.483307 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: E0126 14:30:53.483387 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerName="glance-log" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.483459 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerName="glance-log" Jan 26 14:30:53 crc kubenswrapper[4922]: E0126 14:30:53.483555 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.483634 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.484030 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.484149 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.484232 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.484304 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.484384 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerName="glance-log" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.484463 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" containerName="glance-httpd" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.485405 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.489035 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.489078 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.520198 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.558296 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.576125 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:53 crc kubenswrapper[4922]: E0126 14:30:53.576801 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.576814 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" containerName="watcher-decision-engine" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.592392 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxlgl\" (UniqueName: \"kubernetes.io/projected/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-kube-api-access-pxlgl\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.592523 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.592611 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.592677 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.592727 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-logs\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.628030 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.629349 4922 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.632492 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.642683 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694327 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694393 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694429 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694482 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694532 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-logs\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694584 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694667 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxlgl\" (UniqueName: \"kubernetes.io/projected/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-kube-api-access-pxlgl\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694741 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694768 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a055cb9-0f37-4772-a2af-63c1517cb256-logs\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694799 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.694869 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a055cb9-0f37-4772-a2af-63c1517cb256-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.695027 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-logs\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.695108 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-298ls\" (UniqueName: \"kubernetes.io/projected/6a055cb9-0f37-4772-a2af-63c1517cb256-kube-api-access-298ls\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.695151 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.698542 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.698758 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.702490 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" 
Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.711562 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxlgl\" (UniqueName: \"kubernetes.io/projected/94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2-kube-api-access-pxlgl\") pod \"watcher-decision-engine-0\" (UID: \"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2\") " pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.797877 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.798431 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a055cb9-0f37-4772-a2af-63c1517cb256-logs\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.798503 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a055cb9-0f37-4772-a2af-63c1517cb256-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.798525 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-298ls\" (UniqueName: \"kubernetes.io/projected/6a055cb9-0f37-4772-a2af-63c1517cb256-kube-api-access-298ls\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.798549 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.798567 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.798584 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.798657 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.798878 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a055cb9-0f37-4772-a2af-63c1517cb256-logs\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.799277 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.799654 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a055cb9-0f37-4772-a2af-63c1517cb256-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.806335 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.806917 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.807030 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.809551 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a055cb9-0f37-4772-a2af-63c1517cb256-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.820812 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-298ls\" (UniqueName: \"kubernetes.io/projected/6a055cb9-0f37-4772-a2af-63c1517cb256-kube-api-access-298ls\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.834997 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6a055cb9-0f37-4772-a2af-63c1517cb256\") " pod="openstack/glance-default-external-api-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.947955 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 14:30:53 crc kubenswrapper[4922]: I0126 14:30:53.972498 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.399378 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerStarted","Data":"62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f"} Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.405273 4922 generic.go:334] "Generic (PLEG): container finished" podID="00524d24-79f3-444a-b95b-1cb294892c78" containerID="17265ebaadc079245d56399e4c3412ded4e8a7eddb5ca58e74c8f7510fdbee74" exitCode=0 Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.405355 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00524d24-79f3-444a-b95b-1cb294892c78","Type":"ContainerDied","Data":"17265ebaadc079245d56399e4c3412ded4e8a7eddb5ca58e74c8f7510fdbee74"} Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.441920 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.652257 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.658812 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.713950 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-logs\") pod \"00524d24-79f3-444a-b95b-1cb294892c78\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.714616 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"00524d24-79f3-444a-b95b-1cb294892c78\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.714681 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-combined-ca-bundle\") pod \"00524d24-79f3-444a-b95b-1cb294892c78\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.714725 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-internal-tls-certs\") pod \"00524d24-79f3-444a-b95b-1cb294892c78\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.714826 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-config-data\") pod \"00524d24-79f3-444a-b95b-1cb294892c78\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.714863 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmtzp\" (UniqueName: 
\"kubernetes.io/projected/00524d24-79f3-444a-b95b-1cb294892c78-kube-api-access-pmtzp\") pod \"00524d24-79f3-444a-b95b-1cb294892c78\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.714956 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-httpd-run\") pod \"00524d24-79f3-444a-b95b-1cb294892c78\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.714989 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-scripts\") pod \"00524d24-79f3-444a-b95b-1cb294892c78\" (UID: \"00524d24-79f3-444a-b95b-1cb294892c78\") " Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.718769 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-logs" (OuterVolumeSpecName: "logs") pod "00524d24-79f3-444a-b95b-1cb294892c78" (UID: "00524d24-79f3-444a-b95b-1cb294892c78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.726916 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "00524d24-79f3-444a-b95b-1cb294892c78" (UID: "00524d24-79f3-444a-b95b-1cb294892c78"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.728695 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-scripts" (OuterVolumeSpecName: "scripts") pod "00524d24-79f3-444a-b95b-1cb294892c78" (UID: "00524d24-79f3-444a-b95b-1cb294892c78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.743945 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "00524d24-79f3-444a-b95b-1cb294892c78" (UID: "00524d24-79f3-444a-b95b-1cb294892c78"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.752593 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00524d24-79f3-444a-b95b-1cb294892c78-kube-api-access-pmtzp" (OuterVolumeSpecName: "kube-api-access-pmtzp") pod "00524d24-79f3-444a-b95b-1cb294892c78" (UID: "00524d24-79f3-444a-b95b-1cb294892c78"). InnerVolumeSpecName "kube-api-access-pmtzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.777123 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00524d24-79f3-444a-b95b-1cb294892c78" (UID: "00524d24-79f3-444a-b95b-1cb294892c78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.818222 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "00524d24-79f3-444a-b95b-1cb294892c78" (UID: "00524d24-79f3-444a-b95b-1cb294892c78"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.819353 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.819370 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.819395 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.819405 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.819414 4922 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.819426 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmtzp\" (UniqueName: \"kubernetes.io/projected/00524d24-79f3-444a-b95b-1cb294892c78-kube-api-access-pmtzp\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.819435 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00524d24-79f3-444a-b95b-1cb294892c78-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.840828 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.841003 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-config-data" (OuterVolumeSpecName: "config-data") pod "00524d24-79f3-444a-b95b-1cb294892c78" (UID: "00524d24-79f3-444a-b95b-1cb294892c78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.921156 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:54 crc kubenswrapper[4922]: I0126 14:30:54.921189 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00524d24-79f3-444a-b95b-1cb294892c78-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.102944 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1678095e-0a1d-4199-90c6-ea3afc879e0b" path="/var/lib/kubelet/pods/1678095e-0a1d-4199-90c6-ea3afc879e0b/volumes" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.103750 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1d1b1d-b797-496c-b42e-d4b66f59115c" path="/var/lib/kubelet/pods/4a1d1b1d-b797-496c-b42e-d4b66f59115c/volumes" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.440610 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a055cb9-0f37-4772-a2af-63c1517cb256","Type":"ContainerStarted","Data":"4315851d8da4cbb3befeefbfa8bcaab96d41f05d4501afbbeb795fd2025a78da"} Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.440851 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a055cb9-0f37-4772-a2af-63c1517cb256","Type":"ContainerStarted","Data":"8d56a825f24889cbed369d0550535d109ae9a277f8bc96fe6aa4031abb6c133e"} Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.463257 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2","Type":"ContainerStarted","Data":"0f1bb8e430d96b7965eeef75df5f43d304ea500ac7d3021b3eeb21cf20a5c700"} Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.463298 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2","Type":"ContainerStarted","Data":"19fb09232781e909d247219e98821674048f31c53ccc00ba6c4342653f24c4d3"} Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.499451 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00524d24-79f3-444a-b95b-1cb294892c78","Type":"ContainerDied","Data":"dd2ac07e4b6daf81c1f72fd05663a09ce881c42e2b562d749f540b4c2c2c7bd8"} Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.499501 4922 scope.go:117] "RemoveContainer" containerID="17265ebaadc079245d56399e4c3412ded4e8a7eddb5ca58e74c8f7510fdbee74" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.499637 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.563808 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.563781902 podStartE2EDuration="2.563781902s" podCreationTimestamp="2026-01-26 14:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:55.500390913 +0000 UTC m=+1272.702653685" watchObservedRunningTime="2026-01-26 14:30:55.563781902 +0000 UTC m=+1272.766044674" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.566699 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.589216 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.606220 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:55 crc kubenswrapper[4922]: E0126 14:30:55.606652 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00524d24-79f3-444a-b95b-1cb294892c78" containerName="glance-httpd" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.606664 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="00524d24-79f3-444a-b95b-1cb294892c78" containerName="glance-httpd" Jan 26 14:30:55 crc kubenswrapper[4922]: E0126 14:30:55.606686 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00524d24-79f3-444a-b95b-1cb294892c78" containerName="glance-log" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.606692 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="00524d24-79f3-444a-b95b-1cb294892c78" containerName="glance-log" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.606879 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="00524d24-79f3-444a-b95b-1cb294892c78" containerName="glance-httpd" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.606904 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="00524d24-79f3-444a-b95b-1cb294892c78" containerName="glance-log" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.607909 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.610282 4922 scope.go:117] "RemoveContainer" containerID="de114b6e89954adbd7951400b69a850f023d66b3c83c1a444b73a52fddd6e339" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.620855 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.621050 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.640637 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.653118 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.653157 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.653182 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.653249 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjcpg\" (UniqueName: \"kubernetes.io/projected/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-kube-api-access-pjcpg\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.653273 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.653293 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.653328 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-logs\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") 
" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.653364 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.755709 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.755767 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.755801 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.755879 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjcpg\" (UniqueName: \"kubernetes.io/projected/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-kube-api-access-pjcpg\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.755900 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.755914 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.755957 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-logs\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.755991 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: 
I0126 14:30:55.756659 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.756970 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.759404 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-logs\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.762643 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.766110 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.767750 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.768195 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.774219 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjcpg\" (UniqueName: \"kubernetes.io/projected/abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9-kube-api-access-pjcpg\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.794846 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9\") " pod="openstack/glance-default-internal-api-0" Jan 26 14:30:55 crc kubenswrapper[4922]: I0126 14:30:55.930387 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.519422 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a055cb9-0f37-4772-a2af-63c1517cb256","Type":"ContainerStarted","Data":"bbe8216efbbb3ae41b76d188f506f1e59256abb15140a4cf3c001c46e0edaec8"} Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.538187 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerStarted","Data":"9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917"} Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.538728 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="ceilometer-central-agent" containerID="cri-o://b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6" gracePeriod=30 Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.539025 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.539098 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="proxy-httpd" containerID="cri-o://9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917" gracePeriod=30 Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.539166 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="sg-core" containerID="cri-o://62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f" gracePeriod=30 Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.539227 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="ceilometer-notification-agent" containerID="cri-o://888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df" gracePeriod=30 Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.553170 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.553151928 podStartE2EDuration="3.553151928s" podCreationTimestamp="2026-01-26 14:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:56.539455311 +0000 UTC m=+1273.741718083" watchObservedRunningTime="2026-01-26 14:30:56.553151928 +0000 UTC m=+1273.755414700" Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.564316 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.658914657 podStartE2EDuration="6.564299747s" podCreationTimestamp="2026-01-26 14:30:50 +0000 UTC" firstStartedPulling="2026-01-26 14:30:51.259254634 +0000 UTC m=+1268.461517406" lastFinishedPulling="2026-01-26 14:30:55.164639724 +0000 UTC m=+1272.366902496" observedRunningTime="2026-01-26 14:30:56.563809134 +0000 UTC m=+1273.766071906" watchObservedRunningTime="2026-01-26 14:30:56.564299747 +0000 UTC m=+1273.766562519" Jan 26 14:30:56 crc kubenswrapper[4922]: I0126 14:30:56.607763 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 26 14:30:57 crc kubenswrapper[4922]: I0126 14:30:57.105831 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00524d24-79f3-444a-b95b-1cb294892c78" path="/var/lib/kubelet/pods/00524d24-79f3-444a-b95b-1cb294892c78/volumes" Jan 26 14:30:57 crc kubenswrapper[4922]: I0126 14:30:57.560530 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9","Type":"ContainerStarted","Data":"bb56566c8d215776ba051fff57f4c7c5a98f2502434f7eb2ca4def93f4b05afa"} Jan 26 14:30:57 crc kubenswrapper[4922]: I0126 14:30:57.560788 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9","Type":"ContainerStarted","Data":"333ecb7dfb111b8a3fee52246c3a9ee6ce3dac507c847179a8897c8305e1adea"} Jan 26 14:30:57 crc kubenswrapper[4922]: I0126 14:30:57.563274 4922 generic.go:334] "Generic (PLEG): container finished" podID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerID="9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917" exitCode=0 Jan 26 14:30:57 crc kubenswrapper[4922]: I0126 14:30:57.563313 4922 generic.go:334] "Generic (PLEG): container finished" podID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerID="62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f" exitCode=2 Jan 26 14:30:57 crc kubenswrapper[4922]: I0126 14:30:57.563322 4922 generic.go:334] "Generic (PLEG): container finished" podID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerID="888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df" exitCode=0 Jan 26 14:30:57 crc kubenswrapper[4922]: I0126 14:30:57.564560 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerDied","Data":"9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917"} Jan 26 14:30:57 crc kubenswrapper[4922]: I0126 14:30:57.564623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerDied","Data":"62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f"} Jan 26 14:30:57 crc kubenswrapper[4922]: I0126 14:30:57.564639 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerDied","Data":"888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df"} Jan 26 14:30:58 crc kubenswrapper[4922]: I0126 14:30:58.571855 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9","Type":"ContainerStarted","Data":"f236e72d72614b3041de3e868df4d34957e92fc052b03dc06068c5fead1c4e27"} Jan 26 14:30:58 crc kubenswrapper[4922]: I0126 14:30:58.604491 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.604472146 podStartE2EDuration="3.604472146s" podCreationTimestamp="2026-01-26 14:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:30:58.591567 +0000 UTC m=+1275.793829772" watchObservedRunningTime="2026-01-26 14:30:58.604472146 +0000 UTC m=+1275.806734928" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.490195 4922 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.619178 4922 generic.go:334] "Generic (PLEG): container finished" podID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerID="b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6" exitCode=0 Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.619221 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerDied","Data":"b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6"} Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.619245 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7","Type":"ContainerDied","Data":"2ad4d25d96f32c595960721698b03b58301adc7e5a261ca61af13eb3a44180fa"} Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.619262 4922 scope.go:117] "RemoveContainer" containerID="9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.619289 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.656046 4922 scope.go:117] "RemoveContainer" containerID="62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.666672 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-config-data\") pod \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.666801 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-run-httpd\") pod \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.666825 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-sg-core-conf-yaml\") pod \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.666878 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-log-httpd\") pod \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.666926 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdgmd\" (UniqueName: \"kubernetes.io/projected/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-kube-api-access-rdgmd\") pod \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.666973 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-ceilometer-tls-certs\") pod \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\" (UID: 
\"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.667049 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-combined-ca-bundle\") pod \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.667211 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-scripts\") pod \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\" (UID: \"5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7\") " Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.667471 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" (UID: "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.667532 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" (UID: "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.668268 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.668314 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.677226 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-kube-api-access-rdgmd" (OuterVolumeSpecName: "kube-api-access-rdgmd") pod "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" (UID: "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7"). InnerVolumeSpecName "kube-api-access-rdgmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.696306 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-scripts" (OuterVolumeSpecName: "scripts") pod "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" (UID: "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.696520 4922 scope.go:117] "RemoveContainer" containerID="888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.707798 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" (UID: "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.751275 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" (UID: "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.763879 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" (UID: "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.778547 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdgmd\" (UniqueName: \"kubernetes.io/projected/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-kube-api-access-rdgmd\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.778591 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.778604 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.778617 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.778629 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.782649 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-config-data" (OuterVolumeSpecName: "config-data") pod "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" (UID: "5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.803616 4922 scope.go:117] "RemoveContainer" containerID="b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.826554 4922 scope.go:117] "RemoveContainer" containerID="9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917" Jan 26 14:31:01 crc kubenswrapper[4922]: E0126 14:31:01.828237 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917\": container with ID starting with 9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917 not found: ID does not exist" containerID="9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.828363 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917"} err="failed to get container status \"9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917\": rpc error: code = NotFound desc = could not find container \"9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917\": container with ID starting with 9893cd293e2845ef35ecdfa2a601567021619972385dd32d0f855fd17e3c7917 not found: ID does not exist" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.828466 4922 scope.go:117] "RemoveContainer" containerID="62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f" Jan 26 14:31:01 crc kubenswrapper[4922]: E0126 14:31:01.829200 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f\": container with ID starting with 62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f not found: ID does not exist" containerID="62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.829340 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f"} err="failed to get container status \"62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f\": rpc error: code = NotFound desc = could not find container \"62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f\": container with ID starting with 62e78d07080134feee0427edd4f7b7c5fadfd5e8f55e5df86f45dc8a8a267a6f not found: ID does not exist" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.829427 4922 scope.go:117] "RemoveContainer" containerID="888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df" Jan 26 14:31:01 crc kubenswrapper[4922]: E0126 14:31:01.829851 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df\": container with ID starting with 888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df not found: ID does not exist" containerID="888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.829927 4922 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df"} err="failed to get container status \"888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df\": rpc error: code = NotFound desc = could not find container \"888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df\": container with ID starting with 888b5614706217164db3a46d678b60fc7b68d20424165f710e03f11d8a9f65df not found: ID does not exist" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.829976 4922 scope.go:117] "RemoveContainer" containerID="b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6" Jan 26 14:31:01 crc kubenswrapper[4922]: E0126 14:31:01.830523 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6\": container with ID starting with b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6 not found: ID does not exist" containerID="b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.830597 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6"} err="failed to get container status \"b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6\": rpc error: code = NotFound desc = could not find container \"b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6\": container with ID starting with b64e3cdf1ab2458ab9d37df3298d982e402efe6db37eda471ba3380aa5ff8ba6 not found: ID does not exist" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.880625 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.956602 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.965508 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.983000 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:31:01 crc kubenswrapper[4922]: E0126 14:31:01.983609 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="ceilometer-notification-agent" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.983678 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="ceilometer-notification-agent" Jan 26 14:31:01 crc kubenswrapper[4922]: E0126 14:31:01.983736 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="ceilometer-central-agent" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.983795 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="ceilometer-central-agent" Jan 26 14:31:01 crc kubenswrapper[4922]: E0126 14:31:01.983876 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="proxy-httpd" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.983928 4922 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="proxy-httpd" Jan 26 14:31:01 crc kubenswrapper[4922]: E0126 14:31:01.983995 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="sg-core" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.984051 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="sg-core" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.984306 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="sg-core" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.984370 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="proxy-httpd" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.984430 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="ceilometer-central-agent" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.984491 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" containerName="ceilometer-notification-agent" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.986173 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:31:01 crc kubenswrapper[4922]: I0126 14:31:01.991532 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.000229 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.001647 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.014495 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.084507 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.084554 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.084590 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-config-data\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.084827 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-scripts\") pod \"ceilometer-0\" (UID: 
\"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.084953 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-run-httpd\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.085008 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-log-httpd\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.085155 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.085243 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns4kc\" (UniqueName: \"kubernetes.io/projected/69c52940-14a9-49b1-84ab-40128358ed2d-kube-api-access-ns4kc\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.187177 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.187330 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns4kc\" (UniqueName: \"kubernetes.io/projected/69c52940-14a9-49b1-84ab-40128358ed2d-kube-api-access-ns4kc\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.187457 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.187498 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.187567 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-config-data\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.187669 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-scripts\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.187802 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-run-httpd\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.187850 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-log-httpd\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.189012 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-run-httpd\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.190959 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-log-httpd\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.192518 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.193910 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-config-data\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.194791 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.196852 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.217162 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-scripts\") pod \"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.233160 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns4kc\" (UniqueName: \"kubernetes.io/projected/69c52940-14a9-49b1-84ab-40128358ed2d-kube-api-access-ns4kc\") pod 
\"ceilometer-0\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.307725 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:31:02 crc kubenswrapper[4922]: I0126 14:31:02.790136 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:31:03 crc kubenswrapper[4922]: I0126 14:31:03.124172 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7" path="/var/lib/kubelet/pods/5ddfbcda-03f6-48b0-89f6-7fc05b13c9f7/volumes" Jan 26 14:31:03 crc kubenswrapper[4922]: I0126 14:31:03.639500 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerStarted","Data":"43777e9dca52b60fb9fa26474914ee6f56c72d1c342a54279098085c2c98a596"} Jan 26 14:31:03 crc kubenswrapper[4922]: I0126 14:31:03.639804 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerStarted","Data":"4ee4b423625678aeba661b2b6db24205062ec78bcaa918a6be37d1d581da32f9"} Jan 26 14:31:03 crc kubenswrapper[4922]: I0126 14:31:03.639816 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerStarted","Data":"c3cb64148f7f05ea2889ab88044a196ebb4970c0f3bbb4e04e0484c5b021f18c"} Jan 26 14:31:03 crc kubenswrapper[4922]: I0126 14:31:03.948760 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 14:31:03 crc kubenswrapper[4922]: I0126 14:31:03.972968 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 14:31:03 crc kubenswrapper[4922]: I0126 14:31:03.973331 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 14:31:04 crc kubenswrapper[4922]: I0126 14:31:04.000292 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 14:31:04 crc kubenswrapper[4922]: I0126 14:31:04.030360 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 14:31:04 crc kubenswrapper[4922]: I0126 14:31:04.045000 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 14:31:04 crc kubenswrapper[4922]: I0126 14:31:04.654541 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerStarted","Data":"8c579034241cee7c44654189913097803a8ef439c95da2b7c074531f86ff139b"} Jan 26 14:31:04 crc kubenswrapper[4922]: I0126 14:31:04.657920 4922 generic.go:334] "Generic (PLEG): container finished" podID="226a1df4-9c6e-48d7-9c7f-b1d06f797a65" containerID="d222c4e696659287b31a26a1d7a2454dde709d9bdafd05e138d116619c5b2a98" exitCode=0 Jan 26 14:31:04 crc kubenswrapper[4922]: I0126 14:31:04.659130 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j7942" event={"ID":"226a1df4-9c6e-48d7-9c7f-b1d06f797a65","Type":"ContainerDied","Data":"d222c4e696659287b31a26a1d7a2454dde709d9bdafd05e138d116619c5b2a98"} Jan 26 14:31:04 crc 
kubenswrapper[4922]: I0126 14:31:04.659168 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 14:31:04 crc kubenswrapper[4922]: I0126 14:31:04.659892 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 14:31:04 crc kubenswrapper[4922]: I0126 14:31:04.659983 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 14:31:04 crc kubenswrapper[4922]: I0126 14:31:04.725763 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 14:31:05 crc kubenswrapper[4922]: I0126 14:31:05.668724 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerStarted","Data":"5eeb3efb19cd57deebf7443ca596450dfaea000e305c44816f93e3f3f57c7c81"} Jan 26 14:31:05 crc kubenswrapper[4922]: I0126 14:31:05.691686 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.412306514 podStartE2EDuration="4.691667964s" podCreationTimestamp="2026-01-26 14:31:01 +0000 UTC" firstStartedPulling="2026-01-26 14:31:02.80587577 +0000 UTC m=+1280.008138542" lastFinishedPulling="2026-01-26 14:31:05.08523722 +0000 UTC m=+1282.287499992" observedRunningTime="2026-01-26 14:31:05.688106839 +0000 UTC m=+1282.890369611" watchObservedRunningTime="2026-01-26 14:31:05.691667964 +0000 UTC m=+1282.893930746" Jan 26 14:31:05 crc kubenswrapper[4922]: I0126 14:31:05.931658 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 14:31:05 crc kubenswrapper[4922]: I0126 14:31:05.931729 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 14:31:05 crc kubenswrapper[4922]: I0126 14:31:05.981212 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 14:31:05 crc kubenswrapper[4922]: I0126 14:31:05.992799 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.025464 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.176632 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-combined-ca-bundle\") pod \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.176908 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-scripts\") pod \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.176930 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-config-data\") pod \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.176962 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htlvv\" (UniqueName: \"kubernetes.io/projected/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-kube-api-access-htlvv\") pod \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\" (UID: \"226a1df4-9c6e-48d7-9c7f-b1d06f797a65\") " Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.183223 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-scripts" (OuterVolumeSpecName: "scripts") pod "226a1df4-9c6e-48d7-9c7f-b1d06f797a65" (UID: "226a1df4-9c6e-48d7-9c7f-b1d06f797a65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.192320 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-kube-api-access-htlvv" (OuterVolumeSpecName: "kube-api-access-htlvv") pod "226a1df4-9c6e-48d7-9c7f-b1d06f797a65" (UID: "226a1df4-9c6e-48d7-9c7f-b1d06f797a65"). InnerVolumeSpecName "kube-api-access-htlvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.201659 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "226a1df4-9c6e-48d7-9c7f-b1d06f797a65" (UID: "226a1df4-9c6e-48d7-9c7f-b1d06f797a65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.202008 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-config-data" (OuterVolumeSpecName: "config-data") pod "226a1df4-9c6e-48d7-9c7f-b1d06f797a65" (UID: "226a1df4-9c6e-48d7-9c7f-b1d06f797a65"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.280968 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.281008 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.281017 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.281026 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htlvv\" (UniqueName: \"kubernetes.io/projected/226a1df4-9c6e-48d7-9c7f-b1d06f797a65-kube-api-access-htlvv\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.681348 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j7942" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.682544 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j7942" event={"ID":"226a1df4-9c6e-48d7-9c7f-b1d06f797a65","Type":"ContainerDied","Data":"624dfc3bfb8b3d097223c1d530c56e430d51edd697f6f26219bb6df2e113ab41"} Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.682629 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="624dfc3bfb8b3d097223c1d530c56e430d51edd697f6f26219bb6df2e113ab41" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.682681 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.682696 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.683250 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.683281 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.683295 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.815469 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 14:31:06 crc kubenswrapper[4922]: E0126 14:31:06.817214 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226a1df4-9c6e-48d7-9c7f-b1d06f797a65" containerName="nova-cell0-conductor-db-sync" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.817236 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="226a1df4-9c6e-48d7-9c7f-b1d06f797a65" containerName="nova-cell0-conductor-db-sync" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.817429 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="226a1df4-9c6e-48d7-9c7f-b1d06f797a65" containerName="nova-cell0-conductor-db-sync" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.818199 4922 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.820502 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.820907 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ljf2f" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.832907 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.855281 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.862754 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.894757 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d39f315-a7ea-4004-a187-649a4ff3846b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9d39f315-a7ea-4004-a187-649a4ff3846b\") " pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.895248 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d39f315-a7ea-4004-a187-649a4ff3846b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9d39f315-a7ea-4004-a187-649a4ff3846b\") " pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.895292 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdsl\" (UniqueName: \"kubernetes.io/projected/9d39f315-a7ea-4004-a187-649a4ff3846b-kube-api-access-5jdsl\") pod \"nova-cell0-conductor-0\" (UID: \"9d39f315-a7ea-4004-a187-649a4ff3846b\") " pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.997337 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d39f315-a7ea-4004-a187-649a4ff3846b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9d39f315-a7ea-4004-a187-649a4ff3846b\") " pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.997388 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jdsl\" (UniqueName: \"kubernetes.io/projected/9d39f315-a7ea-4004-a187-649a4ff3846b-kube-api-access-5jdsl\") pod \"nova-cell0-conductor-0\" (UID: \"9d39f315-a7ea-4004-a187-649a4ff3846b\") " pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:06 crc kubenswrapper[4922]: I0126 14:31:06.997443 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d39f315-a7ea-4004-a187-649a4ff3846b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9d39f315-a7ea-4004-a187-649a4ff3846b\") " pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:07 crc kubenswrapper[4922]: I0126 14:31:07.013875 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d39f315-a7ea-4004-a187-649a4ff3846b-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"9d39f315-a7ea-4004-a187-649a4ff3846b\") " pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:07 crc kubenswrapper[4922]: I0126 14:31:07.020595 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jdsl\" (UniqueName: \"kubernetes.io/projected/9d39f315-a7ea-4004-a187-649a4ff3846b-kube-api-access-5jdsl\") pod \"nova-cell0-conductor-0\" (UID: \"9d39f315-a7ea-4004-a187-649a4ff3846b\") " pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:07 crc kubenswrapper[4922]: I0126 14:31:07.026629 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d39f315-a7ea-4004-a187-649a4ff3846b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9d39f315-a7ea-4004-a187-649a4ff3846b\") " pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:07 crc kubenswrapper[4922]: I0126 14:31:07.155829 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:07 crc kubenswrapper[4922]: I0126 14:31:07.811318 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 14:31:08 crc kubenswrapper[4922]: I0126 14:31:08.698636 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9d39f315-a7ea-4004-a187-649a4ff3846b","Type":"ContainerStarted","Data":"ea7246bc659f6989baab8c34109aa3eb9b7d31d7167bd2c21d076f52258763f7"} Jan 26 14:31:08 crc kubenswrapper[4922]: I0126 14:31:08.698882 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9d39f315-a7ea-4004-a187-649a4ff3846b","Type":"ContainerStarted","Data":"fdb452e41909944419db3fc1cf8f9d4c546197e61e0f20205ab8b25490446eee"} Jan 26 14:31:08 crc kubenswrapper[4922]: I0126 14:31:08.699348 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:08 crc kubenswrapper[4922]: I0126 14:31:08.972046 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 14:31:08 crc kubenswrapper[4922]: I0126 14:31:08.972162 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:31:08 crc kubenswrapper[4922]: I0126 14:31:08.985415 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 14:31:08 crc kubenswrapper[4922]: I0126 14:31:08.995864 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.99584693 podStartE2EDuration="2.99584693s" podCreationTimestamp="2026-01-26 14:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:08.716428562 +0000 UTC m=+1285.918691334" watchObservedRunningTime="2026-01-26 14:31:08.99584693 +0000 UTC m=+1286.198109702" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.210244 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.715706 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2g2kf"] Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.717145 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.722438 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.722459 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.733090 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2g2kf"] Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.860706 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-config-data\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.860790 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.860836 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-scripts\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.860886 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm2d9\" (UniqueName: \"kubernetes.io/projected/43199278-1695-4fee-a7e2-6ceb2cc304be-kube-api-access-fm2d9\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.898874 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.901327 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.906092 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.915375 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.962411 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-scripts\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.962561 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm2d9\" (UniqueName: \"kubernetes.io/projected/43199278-1695-4fee-a7e2-6ceb2cc304be-kube-api-access-fm2d9\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.962646 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-config-data\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.962717 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.981987 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-config-data\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.982842 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-scripts\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.985893 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:17 crc kubenswrapper[4922]: I0126 14:31:17.996724 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm2d9\" (UniqueName: \"kubernetes.io/projected/43199278-1695-4fee-a7e2-6ceb2cc304be-kube-api-access-fm2d9\") pod \"nova-cell0-cell-mapping-2g2kf\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.025632 4922 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.027288 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.039903 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.076349 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.076602 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.077907 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-config-data\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.077952 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vglql\" (UniqueName: \"kubernetes.io/projected/c99d70e4-52c9-4829-a40e-6960888a78dc-kube-api-access-vglql\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.078027 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.078056 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99d70e4-52c9-4829-a40e-6960888a78dc-logs\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.101464 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.102825 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.106449 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.149964 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.170169 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fb96c9b4c-rh7c6"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.173151 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.179177 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb96c9b4c-rh7c6"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.190278 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.190365 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w55gj\" (UniqueName: \"kubernetes.io/projected/bde45419-e8b1-4842-afc2-dcafbeacfa06-kube-api-access-w55gj\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.190413 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99d70e4-52c9-4829-a40e-6960888a78dc-logs\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.190615 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.190660 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-config-data\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.190765 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vglql\" (UniqueName: \"kubernetes.io/projected/c99d70e4-52c9-4829-a40e-6960888a78dc-kube-api-access-vglql\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.190837 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde45419-e8b1-4842-afc2-dcafbeacfa06-logs\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.190952 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-config-data\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.197615 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99d70e4-52c9-4829-a40e-6960888a78dc-logs\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.215598 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-config-data\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.218710 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.248662 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vglql\" (UniqueName: \"kubernetes.io/projected/c99d70e4-52c9-4829-a40e-6960888a78dc-kube-api-access-vglql\") pod \"nova-api-0\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295139 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-config-data\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295482 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-svc\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295611 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w55gj\" (UniqueName: \"kubernetes.io/projected/bde45419-e8b1-4842-afc2-dcafbeacfa06-kube-api-access-w55gj\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295659 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295731 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqkfq\" (UniqueName: \"kubernetes.io/projected/75abcea5-f1b6-4c06-9faa-d82426c4eb99-kube-api-access-bqkfq\") pod \"nova-scheduler-0\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295801 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295826 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-config-data\") pod \"nova-scheduler-0\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295845 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-config\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295910 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.295961 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.296054 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.296133 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde45419-e8b1-4842-afc2-dcafbeacfa06-logs\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.296189 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jf9\" (UniqueName: \"kubernetes.io/projected/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-kube-api-access-b4jf9\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.304574 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.304879 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde45419-e8b1-4842-afc2-dcafbeacfa06-logs\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.305833 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-config-data\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " 
pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.324605 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w55gj\" (UniqueName: \"kubernetes.io/projected/bde45419-e8b1-4842-afc2-dcafbeacfa06-kube-api-access-w55gj\") pod \"nova-metadata-0\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") " pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.327984 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.329284 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.332627 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.349376 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.523386 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.663902 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2g2kf"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.722941 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-td682"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.725547 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.728552 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.732013 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.732255 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736077 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736118 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jf9\" (UniqueName: \"kubernetes.io/projected/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-kube-api-access-b4jf9\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736159 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-scripts\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736183 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-svc\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736209 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736243 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736266 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5pd5\" (UniqueName: \"kubernetes.io/projected/716f9795-18d5-4607-9f6c-09295bd2d003-kube-api-access-z5pd5\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736291 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqkfq\" (UniqueName: \"kubernetes.io/projected/75abcea5-f1b6-4c06-9faa-d82426c4eb99-kube-api-access-bqkfq\") pod \"nova-scheduler-0\" (UID: 
\"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736318 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736337 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736354 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-config-data\") pod \"nova-scheduler-0\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736368 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-config\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736405 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736425 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-config-data\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736459 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.736481 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp9mv\" (UniqueName: \"kubernetes.io/projected/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-kube-api-access-hp9mv\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.737620 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-config\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " 
pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.739531 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-svc\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.740174 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.740648 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.740989 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-config-data\") pod \"nova-scheduler-0\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.745984 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.749953 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.761741 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqkfq\" (UniqueName: \"kubernetes.io/projected/75abcea5-f1b6-4c06-9faa-d82426c4eb99-kube-api-access-bqkfq\") pod \"nova-scheduler-0\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.763895 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-td682"] Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.767692 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jf9\" (UniqueName: \"kubernetes.io/projected/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-kube-api-access-b4jf9\") pod \"dnsmasq-dns-7fb96c9b4c-rh7c6\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.837350 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.837433 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-config-data\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.837483 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp9mv\" (UniqueName: \"kubernetes.io/projected/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-kube-api-access-hp9mv\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.837516 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.837554 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-scripts\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.837594 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.837644 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5pd5\" (UniqueName: \"kubernetes.io/projected/716f9795-18d5-4607-9f6c-09295bd2d003-kube-api-access-z5pd5\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.847931 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.850824 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-config-data\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.853438 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " 
pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.853719 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5pd5\" (UniqueName: \"kubernetes.io/projected/716f9795-18d5-4607-9f6c-09295bd2d003-kube-api-access-z5pd5\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.856522 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-scripts\") pod \"nova-cell1-conductor-db-sync-td682\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.859606 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.867301 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp9mv\" (UniqueName: \"kubernetes.io/projected/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-kube-api-access-hp9mv\") pod \"nova-cell1-novncproxy-0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.867723 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.922322 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:18 crc kubenswrapper[4922]: I0126 14:31:18.963554 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:19 crc kubenswrapper[4922]: I0126 14:31:19.053385 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:19 crc kubenswrapper[4922]: I0126 14:31:19.696027 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:31:19 crc kubenswrapper[4922]: W0126 14:31:19.697796 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc99d70e4_52c9_4829_a40e_6960888a78dc.slice/crio-10f0e63e7b04c0a533b048bda36030d706fcf7d18214860b8fce3e0952fc576b WatchSource:0}: Error finding container 10f0e63e7b04c0a533b048bda36030d706fcf7d18214860b8fce3e0952fc576b: Status 404 returned error can't find the container with id 10f0e63e7b04c0a533b048bda36030d706fcf7d18214860b8fce3e0952fc576b Jan 26 14:31:19 crc kubenswrapper[4922]: I0126 14:31:19.931985 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-td682"] Jan 26 14:31:19 crc kubenswrapper[4922]: I0126 14:31:19.948647 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:31:19 crc kubenswrapper[4922]: W0126 14:31:19.960194 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75abcea5_f1b6_4c06_9faa_d82426c4eb99.slice/crio-933945a0e3b30af2d28fbf6a3805fdb9ab16b953159a794b3c52025d041cbbe9 WatchSource:0}: Error finding container 933945a0e3b30af2d28fbf6a3805fdb9ab16b953159a794b3c52025d041cbbe9: Status 404 returned error can't find the container with id 933945a0e3b30af2d28fbf6a3805fdb9ab16b953159a794b3c52025d041cbbe9 Jan 26 14:31:19 crc kubenswrapper[4922]: I0126 14:31:19.982733 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:31:19 crc kubenswrapper[4922]: I0126 14:31:19.998255 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.006657 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb96c9b4c-rh7c6"] Jan 26 14:31:20 crc kubenswrapper[4922]: W0126 14:31:20.010690 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaeb4090c_6ab3_4d9b_b406_9c5c1f85c63f.slice/crio-f5cfe739c7830a35f36ace1835859c8a9b16611fbd72a43d47440f6171603fdb WatchSource:0}: Error finding container f5cfe739c7830a35f36ace1835859c8a9b16611fbd72a43d47440f6171603fdb: Status 404 returned error can't find the container with id f5cfe739c7830a35f36ace1835859c8a9b16611fbd72a43d47440f6171603fdb Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.113986 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6a2689d1-5817-40f6-86f8-f6b46ccdabf0","Type":"ContainerStarted","Data":"a7fb4ffe76a08a9ad48b414f38a6ecb9823b34f61be606ac7637e5c7e1e7fd34"} Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.114872 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" event={"ID":"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f","Type":"ContainerStarted","Data":"f5cfe739c7830a35f36ace1835859c8a9b16611fbd72a43d47440f6171603fdb"} Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.116901 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c99d70e4-52c9-4829-a40e-6960888a78dc","Type":"ContainerStarted","Data":"10f0e63e7b04c0a533b048bda36030d706fcf7d18214860b8fce3e0952fc576b"} Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.122991 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bde45419-e8b1-4842-afc2-dcafbeacfa06","Type":"ContainerStarted","Data":"4b40606ca8fa58b0b341d8c2b80f7f1858ff637c5b6d3103cf432f9b5428a777"} Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.124706 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2g2kf" event={"ID":"43199278-1695-4fee-a7e2-6ceb2cc304be","Type":"ContainerStarted","Data":"ba7bc61144ff64344096775abcbc9dc6bfbd1b8aea09924921c6352fe04f725b"} Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.124741 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2g2kf" event={"ID":"43199278-1695-4fee-a7e2-6ceb2cc304be","Type":"ContainerStarted","Data":"3b1b7968b960ffad27f100da162d6de888bbded1be5cac3b0948ff6638ff5223"} Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.126434 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-td682" event={"ID":"716f9795-18d5-4607-9f6c-09295bd2d003","Type":"ContainerStarted","Data":"6eee657696c1786b9cb1f99a114c205937884d0fda6db3b1958bd1a13a72743a"} Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.127365 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75abcea5-f1b6-4c06-9faa-d82426c4eb99","Type":"ContainerStarted","Data":"933945a0e3b30af2d28fbf6a3805fdb9ab16b953159a794b3c52025d041cbbe9"} Jan 26 14:31:20 crc kubenswrapper[4922]: I0126 14:31:20.155739 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2g2kf" podStartSLOduration=3.155723202 podStartE2EDuration="3.155723202s" podCreationTimestamp="2026-01-26 14:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:20.148787037 +0000 UTC m=+1297.351049809" watchObservedRunningTime="2026-01-26 14:31:20.155723202 +0000 UTC m=+1297.357985974" Jan 26 14:31:21 crc kubenswrapper[4922]: I0126 14:31:21.166446 4922 generic.go:334] "Generic (PLEG): container finished" podID="aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" containerID="8fdaaa5f4795d71ccf1e99a64e78356bdafbe2a5432963066243279e8b2d9555" exitCode=0 Jan 26 14:31:21 crc kubenswrapper[4922]: I0126 14:31:21.169623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" event={"ID":"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f","Type":"ContainerDied","Data":"8fdaaa5f4795d71ccf1e99a64e78356bdafbe2a5432963066243279e8b2d9555"} Jan 26 14:31:21 crc kubenswrapper[4922]: I0126 14:31:21.171605 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-td682" event={"ID":"716f9795-18d5-4607-9f6c-09295bd2d003","Type":"ContainerStarted","Data":"21f13c9e7358973c66c52e47437d696be5ec6f44a0acb68e670b89bb9019df73"} Jan 26 14:31:21 crc kubenswrapper[4922]: I0126 14:31:21.225032 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-td682" podStartSLOduration=3.225012381 podStartE2EDuration="3.225012381s" podCreationTimestamp="2026-01-26 14:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:21.217381046 +0000 UTC m=+1298.419643828" watchObservedRunningTime="2026-01-26 14:31:21.225012381 +0000 UTC m=+1298.427275143" Jan 26 14:31:21 crc kubenswrapper[4922]: I0126 14:31:21.724240 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:31:21 crc kubenswrapper[4922]: I0126 14:31:21.737265 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.203809 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6a2689d1-5817-40f6-86f8-f6b46ccdabf0","Type":"ContainerStarted","Data":"5dab333fea395fae6bb839d8692feca6edbe390ec2feb69cc48f36c424386483"} Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.204461 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6a2689d1-5817-40f6-86f8-f6b46ccdabf0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5dab333fea395fae6bb839d8692feca6edbe390ec2feb69cc48f36c424386483" gracePeriod=30 Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.206953 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" event={"ID":"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f","Type":"ContainerStarted","Data":"d5301014e3f433953b4ac4ac2c9603d4579c4f6f1fc5ff104e6f21f2c4ae1480"} Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.207256 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.210370 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c99d70e4-52c9-4829-a40e-6960888a78dc","Type":"ContainerStarted","Data":"2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd"} Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.210485 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c99d70e4-52c9-4829-a40e-6960888a78dc","Type":"ContainerStarted","Data":"034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49"} Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.212444 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bde45419-e8b1-4842-afc2-dcafbeacfa06","Type":"ContainerStarted","Data":"7524fe92aaca627bcfa079c6cfda6768d47ab983d9db1d6836f595be8fa7591e"} Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.212479 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bde45419-e8b1-4842-afc2-dcafbeacfa06","Type":"ContainerStarted","Data":"dd10998a844d8ad16598aec436ead77de53a0fd7bec15ae9cbc34d3f2b6d21ec"} Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.212535 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerName="nova-metadata-metadata" containerID="cri-o://7524fe92aaca627bcfa079c6cfda6768d47ab983d9db1d6836f595be8fa7591e" gracePeriod=30 Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.212533 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerName="nova-metadata-log" 
containerID="cri-o://dd10998a844d8ad16598aec436ead77de53a0fd7bec15ae9cbc34d3f2b6d21ec" gracePeriod=30 Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.215324 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75abcea5-f1b6-4c06-9faa-d82426c4eb99","Type":"ContainerStarted","Data":"2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d"} Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.223486 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.656228467 podStartE2EDuration="6.223466774s" podCreationTimestamp="2026-01-26 14:31:18 +0000 UTC" firstStartedPulling="2026-01-26 14:31:19.997650866 +0000 UTC m=+1297.199913638" lastFinishedPulling="2026-01-26 14:31:23.564889143 +0000 UTC m=+1300.767151945" observedRunningTime="2026-01-26 14:31:24.221881392 +0000 UTC m=+1301.424144174" watchObservedRunningTime="2026-01-26 14:31:24.223466774 +0000 UTC m=+1301.425729556" Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.243723 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.662801992 podStartE2EDuration="6.243698816s" podCreationTimestamp="2026-01-26 14:31:18 +0000 UTC" firstStartedPulling="2026-01-26 14:31:19.970817766 +0000 UTC m=+1297.173080538" lastFinishedPulling="2026-01-26 14:31:23.55171456 +0000 UTC m=+1300.753977362" observedRunningTime="2026-01-26 14:31:24.242380221 +0000 UTC m=+1301.444643013" watchObservedRunningTime="2026-01-26 14:31:24.243698816 +0000 UTC m=+1301.445961618" Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.264450 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.662139966 podStartE2EDuration="7.264427502s" podCreationTimestamp="2026-01-26 14:31:17 +0000 UTC" firstStartedPulling="2026-01-26 14:31:19.960049338 +0000 UTC m=+1297.162312110" lastFinishedPulling="2026-01-26 14:31:23.562336864 +0000 UTC m=+1300.764599646" observedRunningTime="2026-01-26 14:31:24.259180911 +0000 UTC m=+1301.461443683" watchObservedRunningTime="2026-01-26 14:31:24.264427502 +0000 UTC m=+1301.466690274" Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.288637 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" podStartSLOduration=6.28861784 podStartE2EDuration="6.28861784s" podCreationTimestamp="2026-01-26 14:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:24.277009209 +0000 UTC m=+1301.479271981" watchObservedRunningTime="2026-01-26 14:31:24.28861784 +0000 UTC m=+1301.490880612" Jan 26 14:31:24 crc kubenswrapper[4922]: I0126 14:31:24.296057 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.481611136 podStartE2EDuration="7.296041679s" podCreationTimestamp="2026-01-26 14:31:17 +0000 UTC" firstStartedPulling="2026-01-26 14:31:19.704831737 +0000 UTC m=+1296.907094509" lastFinishedPulling="2026-01-26 14:31:23.51926228 +0000 UTC m=+1300.721525052" observedRunningTime="2026-01-26 14:31:24.29424305 +0000 UTC m=+1301.496505822" watchObservedRunningTime="2026-01-26 14:31:24.296041679 +0000 UTC m=+1301.498304451" Jan 26 14:31:25 crc kubenswrapper[4922]: I0126 14:31:25.229794 4922 generic.go:334] "Generic (PLEG): container finished" 
podID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerID="dd10998a844d8ad16598aec436ead77de53a0fd7bec15ae9cbc34d3f2b6d21ec" exitCode=143 Jan 26 14:31:25 crc kubenswrapper[4922]: I0126 14:31:25.229931 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bde45419-e8b1-4842-afc2-dcafbeacfa06","Type":"ContainerDied","Data":"dd10998a844d8ad16598aec436ead77de53a0fd7bec15ae9cbc34d3f2b6d21ec"} Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.266294 4922 generic.go:334] "Generic (PLEG): container finished" podID="43199278-1695-4fee-a7e2-6ceb2cc304be" containerID="ba7bc61144ff64344096775abcbc9dc6bfbd1b8aea09924921c6352fe04f725b" exitCode=0 Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.266659 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2g2kf" event={"ID":"43199278-1695-4fee-a7e2-6ceb2cc304be","Type":"ContainerDied","Data":"ba7bc61144ff64344096775abcbc9dc6bfbd1b8aea09924921c6352fe04f725b"} Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.525353 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.525428 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.732520 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.732616 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.868250 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.868507 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.916251 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.924507 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:31:28 crc kubenswrapper[4922]: I0126 14:31:28.964634 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.018517 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7759df7475-j7d9x"] Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.019234 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" podUID="65b63bfa-549a-4eb1-977a-90b1e119bd9e" containerName="dnsmasq-dns" containerID="cri-o://cab7de368f7660601c39d19f3f6e9d601847d598912537df722d070df6316366" gracePeriod=10 Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.279570 4922 generic.go:334] "Generic (PLEG): container finished" podID="716f9795-18d5-4607-9f6c-09295bd2d003" containerID="21f13c9e7358973c66c52e47437d696be5ec6f44a0acb68e670b89bb9019df73" exitCode=0 Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.279634 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-td682" 
event={"ID":"716f9795-18d5-4607-9f6c-09295bd2d003","Type":"ContainerDied","Data":"21f13c9e7358973c66c52e47437d696be5ec6f44a0acb68e670b89bb9019df73"} Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.283326 4922 generic.go:334] "Generic (PLEG): container finished" podID="65b63bfa-549a-4eb1-977a-90b1e119bd9e" containerID="cab7de368f7660601c39d19f3f6e9d601847d598912537df722d070df6316366" exitCode=0 Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.283417 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" event={"ID":"65b63bfa-549a-4eb1-977a-90b1e119bd9e","Type":"ContainerDied","Data":"cab7de368f7660601c39d19f3f6e9d601847d598912537df722d070df6316366"} Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.330281 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.591833 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.610184 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.610249 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.745616 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-swift-storage-0\") pod \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.746097 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-sb\") pod \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.746163 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-config\") pod \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.746232 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-svc\") pod \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.746275 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-nb\") pod \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " Jan 
26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.746325 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsl7m\" (UniqueName: \"kubernetes.io/projected/65b63bfa-549a-4eb1-977a-90b1e119bd9e-kube-api-access-wsl7m\") pod \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\" (UID: \"65b63bfa-549a-4eb1-977a-90b1e119bd9e\") " Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.770985 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b63bfa-549a-4eb1-977a-90b1e119bd9e-kube-api-access-wsl7m" (OuterVolumeSpecName: "kube-api-access-wsl7m") pod "65b63bfa-549a-4eb1-977a-90b1e119bd9e" (UID: "65b63bfa-549a-4eb1-977a-90b1e119bd9e"). InnerVolumeSpecName "kube-api-access-wsl7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.800379 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "65b63bfa-549a-4eb1-977a-90b1e119bd9e" (UID: "65b63bfa-549a-4eb1-977a-90b1e119bd9e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.804388 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.820907 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-config" (OuterVolumeSpecName: "config") pod "65b63bfa-549a-4eb1-977a-90b1e119bd9e" (UID: "65b63bfa-549a-4eb1-977a-90b1e119bd9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.828623 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "65b63bfa-549a-4eb1-977a-90b1e119bd9e" (UID: "65b63bfa-549a-4eb1-977a-90b1e119bd9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.833942 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "65b63bfa-549a-4eb1-977a-90b1e119bd9e" (UID: "65b63bfa-549a-4eb1-977a-90b1e119bd9e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.836035 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65b63bfa-549a-4eb1-977a-90b1e119bd9e" (UID: "65b63bfa-549a-4eb1-977a-90b1e119bd9e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.848998 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.849032 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.849044 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.849054 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsl7m\" (UniqueName: \"kubernetes.io/projected/65b63bfa-549a-4eb1-977a-90b1e119bd9e-kube-api-access-wsl7m\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.849107 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.849116 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65b63bfa-549a-4eb1-977a-90b1e119bd9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.950085 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-config-data\") pod \"43199278-1695-4fee-a7e2-6ceb2cc304be\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.950189 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm2d9\" (UniqueName: \"kubernetes.io/projected/43199278-1695-4fee-a7e2-6ceb2cc304be-kube-api-access-fm2d9\") pod \"43199278-1695-4fee-a7e2-6ceb2cc304be\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.950216 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-scripts\") pod \"43199278-1695-4fee-a7e2-6ceb2cc304be\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.950338 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-combined-ca-bundle\") pod \"43199278-1695-4fee-a7e2-6ceb2cc304be\" (UID: \"43199278-1695-4fee-a7e2-6ceb2cc304be\") " Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.953666 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-scripts" (OuterVolumeSpecName: "scripts") pod "43199278-1695-4fee-a7e2-6ceb2cc304be" (UID: "43199278-1695-4fee-a7e2-6ceb2cc304be"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.954330 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43199278-1695-4fee-a7e2-6ceb2cc304be-kube-api-access-fm2d9" (OuterVolumeSpecName: "kube-api-access-fm2d9") pod "43199278-1695-4fee-a7e2-6ceb2cc304be" (UID: "43199278-1695-4fee-a7e2-6ceb2cc304be"). InnerVolumeSpecName "kube-api-access-fm2d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.986111 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43199278-1695-4fee-a7e2-6ceb2cc304be" (UID: "43199278-1695-4fee-a7e2-6ceb2cc304be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:29 crc kubenswrapper[4922]: I0126 14:31:29.998740 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-config-data" (OuterVolumeSpecName: "config-data") pod "43199278-1695-4fee-a7e2-6ceb2cc304be" (UID: "43199278-1695-4fee-a7e2-6ceb2cc304be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.053251 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.053522 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm2d9\" (UniqueName: \"kubernetes.io/projected/43199278-1695-4fee-a7e2-6ceb2cc304be-kube-api-access-fm2d9\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.053640 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.053725 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43199278-1695-4fee-a7e2-6ceb2cc304be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.293272 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.293279 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7759df7475-j7d9x" event={"ID":"65b63bfa-549a-4eb1-977a-90b1e119bd9e","Type":"ContainerDied","Data":"eb52340f1b4eb79829a5e6c1386b1b83e96803a64ed8a99458da44232a07fdfa"} Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.293363 4922 scope.go:117] "RemoveContainer" containerID="cab7de368f7660601c39d19f3f6e9d601847d598912537df722d070df6316366" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.327847 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2g2kf" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.328351 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2g2kf" event={"ID":"43199278-1695-4fee-a7e2-6ceb2cc304be","Type":"ContainerDied","Data":"3b1b7968b960ffad27f100da162d6de888bbded1be5cac3b0948ff6638ff5223"} Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.328394 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b1b7968b960ffad27f100da162d6de888bbded1be5cac3b0948ff6638ff5223" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.328301 4922 scope.go:117] "RemoveContainer" containerID="278eecfd1f8c2211e90c826fed7122f534fc221352ac912e81883f9972117304" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.338122 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7759df7475-j7d9x"] Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.345551 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7759df7475-j7d9x"] Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.479144 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.479755 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-api" containerID="cri-o://2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd" gracePeriod=30 Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.479687 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-log" containerID="cri-o://034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49" gracePeriod=30 Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.515478 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.735752 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.867187 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-scripts\") pod \"716f9795-18d5-4607-9f6c-09295bd2d003\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.867254 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-config-data\") pod \"716f9795-18d5-4607-9f6c-09295bd2d003\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.867290 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5pd5\" (UniqueName: \"kubernetes.io/projected/716f9795-18d5-4607-9f6c-09295bd2d003-kube-api-access-z5pd5\") pod \"716f9795-18d5-4607-9f6c-09295bd2d003\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.867340 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-combined-ca-bundle\") pod \"716f9795-18d5-4607-9f6c-09295bd2d003\" (UID: \"716f9795-18d5-4607-9f6c-09295bd2d003\") " Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.872206 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716f9795-18d5-4607-9f6c-09295bd2d003-kube-api-access-z5pd5" (OuterVolumeSpecName: "kube-api-access-z5pd5") pod "716f9795-18d5-4607-9f6c-09295bd2d003" (UID: "716f9795-18d5-4607-9f6c-09295bd2d003"). InnerVolumeSpecName "kube-api-access-z5pd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.874206 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-scripts" (OuterVolumeSpecName: "scripts") pod "716f9795-18d5-4607-9f6c-09295bd2d003" (UID: "716f9795-18d5-4607-9f6c-09295bd2d003"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.898494 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-config-data" (OuterVolumeSpecName: "config-data") pod "716f9795-18d5-4607-9f6c-09295bd2d003" (UID: "716f9795-18d5-4607-9f6c-09295bd2d003"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.901630 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "716f9795-18d5-4607-9f6c-09295bd2d003" (UID: "716f9795-18d5-4607-9f6c-09295bd2d003"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.970119 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.970196 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.970222 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5pd5\" (UniqueName: \"kubernetes.io/projected/716f9795-18d5-4607-9f6c-09295bd2d003-kube-api-access-z5pd5\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:30 crc kubenswrapper[4922]: I0126 14:31:30.970243 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/716f9795-18d5-4607-9f6c-09295bd2d003-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.109657 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b63bfa-549a-4eb1-977a-90b1e119bd9e" path="/var/lib/kubelet/pods/65b63bfa-549a-4eb1-977a-90b1e119bd9e/volumes" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.339373 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-td682" event={"ID":"716f9795-18d5-4607-9f6c-09295bd2d003","Type":"ContainerDied","Data":"6eee657696c1786b9cb1f99a114c205937884d0fda6db3b1958bd1a13a72743a"} Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.339405 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-td682" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.339419 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eee657696c1786b9cb1f99a114c205937884d0fda6db3b1958bd1a13a72743a" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.343444 4922 generic.go:334] "Generic (PLEG): container finished" podID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerID="034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49" exitCode=143 Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.343526 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c99d70e4-52c9-4829-a40e-6960888a78dc","Type":"ContainerDied","Data":"034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49"} Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.382255 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 14:31:31 crc kubenswrapper[4922]: E0126 14:31:31.382979 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b63bfa-549a-4eb1-977a-90b1e119bd9e" containerName="init" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.383059 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b63bfa-549a-4eb1-977a-90b1e119bd9e" containerName="init" Jan 26 14:31:31 crc kubenswrapper[4922]: E0126 14:31:31.383159 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b63bfa-549a-4eb1-977a-90b1e119bd9e" containerName="dnsmasq-dns" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.383215 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b63bfa-549a-4eb1-977a-90b1e119bd9e" containerName="dnsmasq-dns" Jan 26 14:31:31 crc kubenswrapper[4922]: E0126 14:31:31.383282 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716f9795-18d5-4607-9f6c-09295bd2d003" containerName="nova-cell1-conductor-db-sync" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.383341 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="716f9795-18d5-4607-9f6c-09295bd2d003" containerName="nova-cell1-conductor-db-sync" Jan 26 14:31:31 crc kubenswrapper[4922]: E0126 14:31:31.383401 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43199278-1695-4fee-a7e2-6ceb2cc304be" containerName="nova-manage" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.383462 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="43199278-1695-4fee-a7e2-6ceb2cc304be" containerName="nova-manage" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.383693 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b63bfa-549a-4eb1-977a-90b1e119bd9e" containerName="dnsmasq-dns" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.383765 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="43199278-1695-4fee-a7e2-6ceb2cc304be" containerName="nova-manage" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.383850 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="716f9795-18d5-4607-9f6c-09295bd2d003" containerName="nova-cell1-conductor-db-sync" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.384627 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.389014 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.390197 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.485698 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8407196-4ae4-4db4-95d5-3498f9503f5e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b8407196-4ae4-4db4-95d5-3498f9503f5e\") " pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.485748 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8407196-4ae4-4db4-95d5-3498f9503f5e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b8407196-4ae4-4db4-95d5-3498f9503f5e\") " pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.485840 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r7hh\" (UniqueName: \"kubernetes.io/projected/b8407196-4ae4-4db4-95d5-3498f9503f5e-kube-api-access-7r7hh\") pod \"nova-cell1-conductor-0\" (UID: \"b8407196-4ae4-4db4-95d5-3498f9503f5e\") " pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.586907 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8407196-4ae4-4db4-95d5-3498f9503f5e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b8407196-4ae4-4db4-95d5-3498f9503f5e\") " pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.586963 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8407196-4ae4-4db4-95d5-3498f9503f5e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b8407196-4ae4-4db4-95d5-3498f9503f5e\") " pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.587035 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r7hh\" (UniqueName: \"kubernetes.io/projected/b8407196-4ae4-4db4-95d5-3498f9503f5e-kube-api-access-7r7hh\") pod \"nova-cell1-conductor-0\" (UID: \"b8407196-4ae4-4db4-95d5-3498f9503f5e\") " pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.591692 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8407196-4ae4-4db4-95d5-3498f9503f5e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b8407196-4ae4-4db4-95d5-3498f9503f5e\") " pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.604743 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8407196-4ae4-4db4-95d5-3498f9503f5e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b8407196-4ae4-4db4-95d5-3498f9503f5e\") " pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.634579 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r7hh\" (UniqueName: \"kubernetes.io/projected/b8407196-4ae4-4db4-95d5-3498f9503f5e-kube-api-access-7r7hh\") pod \"nova-cell1-conductor-0\" (UID: \"b8407196-4ae4-4db4-95d5-3498f9503f5e\") " pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:31 crc kubenswrapper[4922]: I0126 14:31:31.706852 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:32 crc kubenswrapper[4922]: I0126 14:31:32.147102 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 14:31:32 crc kubenswrapper[4922]: I0126 14:31:32.318768 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 14:31:32 crc kubenswrapper[4922]: I0126 14:31:32.381457 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="75abcea5-f1b6-4c06-9faa-d82426c4eb99" containerName="nova-scheduler-scheduler" containerID="cri-o://2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d" gracePeriod=30 Jan 26 14:31:32 crc kubenswrapper[4922]: I0126 14:31:32.381764 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b8407196-4ae4-4db4-95d5-3498f9503f5e","Type":"ContainerStarted","Data":"0c664750c6083710426098fb747fd3321016542117c17d4ec8b5ceab78033ab3"} Jan 26 14:31:33 crc kubenswrapper[4922]: I0126 14:31:33.398660 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b8407196-4ae4-4db4-95d5-3498f9503f5e","Type":"ContainerStarted","Data":"57a6f31f9068ba593b324d0b983334cb1d2b36403d1678f60ccb6989916ede4d"} Jan 26 14:31:33 crc kubenswrapper[4922]: I0126 14:31:33.402504 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 14:31:33 crc kubenswrapper[4922]: I0126 14:31:33.432682 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.432656085 podStartE2EDuration="2.432656085s" podCreationTimestamp="2026-01-26 14:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:33.415377091 +0000 UTC m=+1310.617639863" watchObservedRunningTime="2026-01-26 14:31:33.432656085 +0000 UTC m=+1310.634918887" Jan 26 14:31:33 crc kubenswrapper[4922]: E0126 14:31:33.870655 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 14:31:33 crc kubenswrapper[4922]: E0126 14:31:33.872643 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 14:31:33 crc kubenswrapper[4922]: E0126 14:31:33.873950 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
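The pod_startup_latency_tracker entry above is plain arithmetic: the logged podStartSLOduration=2.432656085 equals watchObservedRunningTime minus podCreationTimestamp, with the image-pull window subtracted (zero here, since firstStartedPulling/lastFinishedPulling are the zero time, i.e. the images were already cached). Reproducing the number from the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

// Timestamps copied from the log; time.Parse accepts the optional
// fractional seconds even though the layout omits them.
func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2026-01-26 14:31:31 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-26 14:31:33.432656085 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("podStartSLOduration =", observed.Sub(created)) // 2.432656085s
}
```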
stderr: , exit code -1" containerID="2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 14:31:33 crc kubenswrapper[4922]: E0126 14:31:33.874016 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="75abcea5-f1b6-4c06-9faa-d82426c4eb99" containerName="nova-scheduler-scheduler" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.178265 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.343193 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99d70e4-52c9-4829-a40e-6960888a78dc-logs\") pod \"c99d70e4-52c9-4829-a40e-6960888a78dc\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.343597 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-config-data\") pod \"c99d70e4-52c9-4829-a40e-6960888a78dc\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.344095 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99d70e4-52c9-4829-a40e-6960888a78dc-logs" (OuterVolumeSpecName: "logs") pod "c99d70e4-52c9-4829-a40e-6960888a78dc" (UID: "c99d70e4-52c9-4829-a40e-6960888a78dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.344447 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vglql\" (UniqueName: \"kubernetes.io/projected/c99d70e4-52c9-4829-a40e-6960888a78dc-kube-api-access-vglql\") pod \"c99d70e4-52c9-4829-a40e-6960888a78dc\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.344865 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-combined-ca-bundle\") pod \"c99d70e4-52c9-4829-a40e-6960888a78dc\" (UID: \"c99d70e4-52c9-4829-a40e-6960888a78dc\") " Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.345405 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99d70e4-52c9-4829-a40e-6960888a78dc-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.356325 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99d70e4-52c9-4829-a40e-6960888a78dc-kube-api-access-vglql" (OuterVolumeSpecName: "kube-api-access-vglql") pod "c99d70e4-52c9-4829-a40e-6960888a78dc" (UID: "c99d70e4-52c9-4829-a40e-6960888a78dc"). InnerVolumeSpecName "kube-api-access-vglql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.376621 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-config-data" (OuterVolumeSpecName: "config-data") pod "c99d70e4-52c9-4829-a40e-6960888a78dc" (UID: "c99d70e4-52c9-4829-a40e-6960888a78dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.402708 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c99d70e4-52c9-4829-a40e-6960888a78dc" (UID: "c99d70e4-52c9-4829-a40e-6960888a78dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.420267 4922 generic.go:334] "Generic (PLEG): container finished" podID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerID="2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd" exitCode=0 Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.421102 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.421221 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c99d70e4-52c9-4829-a40e-6960888a78dc","Type":"ContainerDied","Data":"2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd"} Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.421290 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c99d70e4-52c9-4829-a40e-6960888a78dc","Type":"ContainerDied","Data":"10f0e63e7b04c0a533b048bda36030d706fcf7d18214860b8fce3e0952fc576b"} Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.421315 4922 scope.go:117] "RemoveContainer" containerID="2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.447085 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.447156 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vglql\" (UniqueName: \"kubernetes.io/projected/c99d70e4-52c9-4829-a40e-6960888a78dc-kube-api-access-vglql\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.447175 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d70e4-52c9-4829-a40e-6960888a78dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.477508 4922 scope.go:117] "RemoveContainer" containerID="034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.481173 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.495750 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.509702 4922 scope.go:117] "RemoveContainer" 
containerID="2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.510782 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 14:31:34 crc kubenswrapper[4922]: E0126 14:31:34.510906 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd\": container with ID starting with 2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd not found: ID does not exist" containerID="2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.510947 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd"} err="failed to get container status \"2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd\": rpc error: code = NotFound desc = could not find container \"2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd\": container with ID starting with 2f1e09f6418e248ed854425cec811f88697153e723780c1002633430307e37bd not found: ID does not exist" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.510972 4922 scope.go:117] "RemoveContainer" containerID="034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49" Jan 26 14:31:34 crc kubenswrapper[4922]: E0126 14:31:34.511349 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-log" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.511370 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-log" Jan 26 14:31:34 crc kubenswrapper[4922]: E0126 14:31:34.511383 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49\": container with ID starting with 034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49 not found: ID does not exist" containerID="034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.511410 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49"} err="failed to get container status \"034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49\": rpc error: code = NotFound desc = could not find container \"034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49\": container with ID starting with 034ace7a52db009a8fd6d0bf011de739b00534036b2512ba90a283dada705d49 not found: ID does not exist" Jan 26 14:31:34 crc kubenswrapper[4922]: E0126 14:31:34.511396 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-api" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.511433 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-api" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.511814 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-log" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 
14:31:34.511841 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" containerName="nova-api-api" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.512942 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.515009 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.519712 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.652949 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.653082 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592c0fd8-0a3a-410a-a26c-5f91d54365c6-logs\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.653104 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kjrm\" (UniqueName: \"kubernetes.io/projected/592c0fd8-0a3a-410a-a26c-5f91d54365c6-kube-api-access-9kjrm\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.653132 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-config-data\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.755375 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.755456 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592c0fd8-0a3a-410a-a26c-5f91d54365c6-logs\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.755486 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kjrm\" (UniqueName: \"kubernetes.io/projected/592c0fd8-0a3a-410a-a26c-5f91d54365c6-kube-api-access-9kjrm\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.755522 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-config-data\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: 
I0126 14:31:34.755874 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592c0fd8-0a3a-410a-a26c-5f91d54365c6-logs\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.758869 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.759673 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-config-data\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.772712 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kjrm\" (UniqueName: \"kubernetes.io/projected/592c0fd8-0a3a-410a-a26c-5f91d54365c6-kube-api-access-9kjrm\") pod \"nova-api-0\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") " pod="openstack/nova-api-0" Jan 26 14:31:34 crc kubenswrapper[4922]: I0126 14:31:34.832401 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:31:35 crc kubenswrapper[4922]: I0126 14:31:35.117377 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99d70e4-52c9-4829-a40e-6960888a78dc" path="/var/lib/kubelet/pods/c99d70e4-52c9-4829-a40e-6960888a78dc/volumes" Jan 26 14:31:35 crc kubenswrapper[4922]: W0126 14:31:35.317220 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod592c0fd8_0a3a_410a_a26c_5f91d54365c6.slice/crio-00045a88a4453ba490cb91b2a39d4c3687f3a55f499074d86b5a247a9b3ee93e WatchSource:0}: Error finding container 00045a88a4453ba490cb91b2a39d4c3687f3a55f499074d86b5a247a9b3ee93e: Status 404 returned error can't find the container with id 00045a88a4453ba490cb91b2a39d4c3687f3a55f499074d86b5a247a9b3ee93e Jan 26 14:31:35 crc kubenswrapper[4922]: I0126 14:31:35.321765 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:31:35 crc kubenswrapper[4922]: I0126 14:31:35.433494 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"592c0fd8-0a3a-410a-a26c-5f91d54365c6","Type":"ContainerStarted","Data":"00045a88a4453ba490cb91b2a39d4c3687f3a55f499074d86b5a247a9b3ee93e"} Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.021009 4922 util.go:48] "No ready sandbox for pod can be found. 
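The W-level cadvisor warning above ("Failed to process watch event ... Status 404 ... can't find the container") looks like a transient start-up race: the crio cgroup for the new nova-api pod appears before the runtime can report the container, so the first lookup 404s and a later pass succeeds (the pod proceeds to ContainerStarted immediately afterwards). A generic retry sketch of that shape; attempt counts and backoff are invented:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupWithRetry polls a lookup that can transiently fail while a
// container's cgroup exists but the runtime has not registered the
// container yet, returning the last error if all attempts fail.
func lookupWithRetry(lookup func() error, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = lookup(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := lookupWithRetry(func() error {
		calls++
		if calls < 3 {
			return errors.New("Status 404: can't find the container")
		}
		return nil
	}, 5, 10*time.Millisecond)
	fmt.Println("result after", calls, "calls:", err)
}
```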
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.182850 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-config-data\") pod \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.182900 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqkfq\" (UniqueName: \"kubernetes.io/projected/75abcea5-f1b6-4c06-9faa-d82426c4eb99-kube-api-access-bqkfq\") pod \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.182977 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-combined-ca-bundle\") pod \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\" (UID: \"75abcea5-f1b6-4c06-9faa-d82426c4eb99\") " Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.188185 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75abcea5-f1b6-4c06-9faa-d82426c4eb99-kube-api-access-bqkfq" (OuterVolumeSpecName: "kube-api-access-bqkfq") pod "75abcea5-f1b6-4c06-9faa-d82426c4eb99" (UID: "75abcea5-f1b6-4c06-9faa-d82426c4eb99"). InnerVolumeSpecName "kube-api-access-bqkfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.216397 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-config-data" (OuterVolumeSpecName: "config-data") pod "75abcea5-f1b6-4c06-9faa-d82426c4eb99" (UID: "75abcea5-f1b6-4c06-9faa-d82426c4eb99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.228834 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75abcea5-f1b6-4c06-9faa-d82426c4eb99" (UID: "75abcea5-f1b6-4c06-9faa-d82426c4eb99"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.285512 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.285557 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqkfq\" (UniqueName: \"kubernetes.io/projected/75abcea5-f1b6-4c06-9faa-d82426c4eb99-kube-api-access-bqkfq\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.285574 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75abcea5-f1b6-4c06-9faa-d82426c4eb99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.450609 4922 generic.go:334] "Generic (PLEG): container finished" podID="75abcea5-f1b6-4c06-9faa-d82426c4eb99" containerID="2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d" exitCode=0 Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.451041 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75abcea5-f1b6-4c06-9faa-d82426c4eb99","Type":"ContainerDied","Data":"2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d"} Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.451129 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"75abcea5-f1b6-4c06-9faa-d82426c4eb99","Type":"ContainerDied","Data":"933945a0e3b30af2d28fbf6a3805fdb9ab16b953159a794b3c52025d041cbbe9"} Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.451161 4922 scope.go:117] "RemoveContainer" containerID="2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.451362 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.454542 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"592c0fd8-0a3a-410a-a26c-5f91d54365c6","Type":"ContainerStarted","Data":"0fa5f3f9824e862f1a4d3584c440e5c38d9907777bbddcfa4433de4e405adcd8"} Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.454596 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"592c0fd8-0a3a-410a-a26c-5f91d54365c6","Type":"ContainerStarted","Data":"ef52a3192a2a409e7d785294151c4d4b2d4030227b07bf49b6dd2ed5e2289a88"} Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.478228 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.47820975 podStartE2EDuration="2.47820975s" podCreationTimestamp="2026-01-26 14:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:36.475565509 +0000 UTC m=+1313.677828291" watchObservedRunningTime="2026-01-26 14:31:36.47820975 +0000 UTC m=+1313.680472522" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.479258 4922 scope.go:117] "RemoveContainer" containerID="2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d" Jan 26 14:31:36 crc kubenswrapper[4922]: E0126 14:31:36.479810 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d\": container with ID starting with 2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d not found: ID does not exist" containerID="2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.479864 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d"} err="failed to get container status \"2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d\": rpc error: code = NotFound desc = could not find container \"2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d\": container with ID starting with 2218da3e251596fa15f8ce0b697ad33ea08abf8ba56285c2b0249b27a96db05d not found: ID does not exist" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.550906 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.551078 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.570341 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:31:36 crc kubenswrapper[4922]: E0126 14:31:36.570756 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75abcea5-f1b6-4c06-9faa-d82426c4eb99" containerName="nova-scheduler-scheduler" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.570779 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="75abcea5-f1b6-4c06-9faa-d82426c4eb99" containerName="nova-scheduler-scheduler" Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.571006 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="75abcea5-f1b6-4c06-9faa-d82426c4eb99" containerName="nova-scheduler-scheduler" Jan 26 14:31:36 crc 
kubenswrapper[4922]: I0126 14:31:36.571711 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.571789 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.579483 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.693621 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m92h\" (UniqueName: \"kubernetes.io/projected/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-kube-api-access-5m92h\") pod \"nova-scheduler-0\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.693685 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.693738 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-config-data\") pod \"nova-scheduler-0\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.796390 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m92h\" (UniqueName: \"kubernetes.io/projected/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-kube-api-access-5m92h\") pod \"nova-scheduler-0\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.796444 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.796491 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-config-data\") pod \"nova-scheduler-0\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.803586 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-config-data\") pod \"nova-scheduler-0\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.809544 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.815559 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m92h\" (UniqueName: \"kubernetes.io/projected/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-kube-api-access-5m92h\") pod \"nova-scheduler-0\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " pod="openstack/nova-scheduler-0"
Jan 26 14:31:36 crc kubenswrapper[4922]: I0126 14:31:36.901774 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 14:31:37 crc kubenswrapper[4922]: I0126 14:31:37.104128 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75abcea5-f1b6-4c06-9faa-d82426c4eb99" path="/var/lib/kubelet/pods/75abcea5-f1b6-4c06-9faa-d82426c4eb99/volumes"
Jan 26 14:31:37 crc kubenswrapper[4922]: I0126 14:31:37.391551 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 14:31:37 crc kubenswrapper[4922]: I0126 14:31:37.467982 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfad7c05-9d6a-4435-b019-731d7e1c5ec3","Type":"ContainerStarted","Data":"bcd0c133f3196f8188be575d30c0798e479734d2e25cee70abbdb0ca0248a710"}
Jan 26 14:31:38 crc kubenswrapper[4922]: I0126 14:31:38.479174 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfad7c05-9d6a-4435-b019-731d7e1c5ec3","Type":"ContainerStarted","Data":"aa21a52dd279c7ffddd6539a48452941eaf397e8108327321383a9116c469e8f"}
Jan 26 14:31:38 crc kubenswrapper[4922]: I0126 14:31:38.520349 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.520331712 podStartE2EDuration="2.520331712s" podCreationTimestamp="2026-01-26 14:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:38.510097658 +0000 UTC m=+1315.712360440" watchObservedRunningTime="2026-01-26 14:31:38.520331712 +0000 UTC m=+1315.722594494"
Jan 26 14:31:41 crc kubenswrapper[4922]: I0126 14:31:41.307260 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:31:41 crc kubenswrapper[4922]: I0126 14:31:41.307644 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:31:41 crc kubenswrapper[4922]: I0126 14:31:41.762399 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 26 14:31:41 crc kubenswrapper[4922]: I0126 14:31:41.902209 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 26 14:31:44 crc kubenswrapper[4922]: I0126 14:31:44.834785 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 14:31:44 crc kubenswrapper[4922]: I0126 14:31:44.835616 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 14:31:45 crc kubenswrapper[4922]: I0126 14:31:45.917210 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 14:31:45 crc kubenswrapper[4922]: I0126 14:31:45.917250 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 14:31:46 crc kubenswrapper[4922]: I0126 14:31:46.902418 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 26 14:31:46 crc kubenswrapper[4922]: I0126 14:31:46.950299 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 26 14:31:47 crc kubenswrapper[4922]: I0126 14:31:47.615950 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 26 14:31:54 crc kubenswrapper[4922]: I0126 14:31:54.661347 4922 generic.go:334] "Generic (PLEG): container finished" podID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerID="7524fe92aaca627bcfa079c6cfda6768d47ab983d9db1d6836f595be8fa7591e" exitCode=137
Jan 26 14:31:54 crc kubenswrapper[4922]: I0126 14:31:54.661440 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bde45419-e8b1-4842-afc2-dcafbeacfa06","Type":"ContainerDied","Data":"7524fe92aaca627bcfa079c6cfda6768d47ab983d9db1d6836f595be8fa7591e"}
Jan 26 14:31:54 crc kubenswrapper[4922]: I0126 14:31:54.663958 4922 generic.go:334] "Generic (PLEG): container finished" podID="6a2689d1-5817-40f6-86f8-f6b46ccdabf0" containerID="5dab333fea395fae6bb839d8692feca6edbe390ec2feb69cc48f36c424386483" exitCode=137
Jan 26 14:31:54 crc kubenswrapper[4922]: I0126 14:31:54.664006 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6a2689d1-5817-40f6-86f8-f6b46ccdabf0","Type":"ContainerDied","Data":"5dab333fea395fae6bb839d8692feca6edbe390ec2feb69cc48f36c424386483"}
Jan 26 14:31:54 crc kubenswrapper[4922]: I0126 14:31:54.847566 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 14:31:54 crc kubenswrapper[4922]: I0126 14:31:54.848254 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 14:31:54 crc kubenswrapper[4922]: I0126 14:31:54.851478 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 14:31:54 crc kubenswrapper[4922]: I0126 14:31:54.858920 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.222706 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.231527 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.299248 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp9mv\" (UniqueName: \"kubernetes.io/projected/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-kube-api-access-hp9mv\") pod \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") "
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.299359 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-combined-ca-bundle\") pod \"bde45419-e8b1-4842-afc2-dcafbeacfa06\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") "
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.299430 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-config-data\") pod \"bde45419-e8b1-4842-afc2-dcafbeacfa06\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") "
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.299460 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-combined-ca-bundle\") pod \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") "
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.299570 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w55gj\" (UniqueName: \"kubernetes.io/projected/bde45419-e8b1-4842-afc2-dcafbeacfa06-kube-api-access-w55gj\") pod \"bde45419-e8b1-4842-afc2-dcafbeacfa06\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") "
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.299618 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde45419-e8b1-4842-afc2-dcafbeacfa06-logs\") pod \"bde45419-e8b1-4842-afc2-dcafbeacfa06\" (UID: \"bde45419-e8b1-4842-afc2-dcafbeacfa06\") "
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.299695 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-config-data\") pod \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\" (UID: \"6a2689d1-5817-40f6-86f8-f6b46ccdabf0\") "
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.300260 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bde45419-e8b1-4842-afc2-dcafbeacfa06-logs" (OuterVolumeSpecName: "logs") pod "bde45419-e8b1-4842-afc2-dcafbeacfa06" (UID: "bde45419-e8b1-4842-afc2-dcafbeacfa06"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.306076 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-kube-api-access-hp9mv" (OuterVolumeSpecName: "kube-api-access-hp9mv") pod "6a2689d1-5817-40f6-86f8-f6b46ccdabf0" (UID: "6a2689d1-5817-40f6-86f8-f6b46ccdabf0"). InnerVolumeSpecName "kube-api-access-hp9mv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.306193 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde45419-e8b1-4842-afc2-dcafbeacfa06-kube-api-access-w55gj" (OuterVolumeSpecName: "kube-api-access-w55gj") pod "bde45419-e8b1-4842-afc2-dcafbeacfa06" (UID: "bde45419-e8b1-4842-afc2-dcafbeacfa06"). InnerVolumeSpecName "kube-api-access-w55gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.327596 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a2689d1-5817-40f6-86f8-f6b46ccdabf0" (UID: "6a2689d1-5817-40f6-86f8-f6b46ccdabf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.336617 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-config-data" (OuterVolumeSpecName: "config-data") pod "6a2689d1-5817-40f6-86f8-f6b46ccdabf0" (UID: "6a2689d1-5817-40f6-86f8-f6b46ccdabf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.340492 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bde45419-e8b1-4842-afc2-dcafbeacfa06" (UID: "bde45419-e8b1-4842-afc2-dcafbeacfa06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.342676 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-config-data" (OuterVolumeSpecName: "config-data") pod "bde45419-e8b1-4842-afc2-dcafbeacfa06" (UID: "bde45419-e8b1-4842-afc2-dcafbeacfa06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.401705 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde45419-e8b1-4842-afc2-dcafbeacfa06-logs\") on node \"crc\" DevicePath \"\""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.401882 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.401945 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp9mv\" (UniqueName: \"kubernetes.io/projected/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-kube-api-access-hp9mv\") on node \"crc\" DevicePath \"\""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.402014 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.402118 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde45419-e8b1-4842-afc2-dcafbeacfa06-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.402191 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2689d1-5817-40f6-86f8-f6b46ccdabf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.402282 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w55gj\" (UniqueName: \"kubernetes.io/projected/bde45419-e8b1-4842-afc2-dcafbeacfa06-kube-api-access-w55gj\") on node \"crc\" DevicePath \"\""
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.687203 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6a2689d1-5817-40f6-86f8-f6b46ccdabf0","Type":"ContainerDied","Data":"a7fb4ffe76a08a9ad48b414f38a6ecb9823b34f61be606ac7637e5c7e1e7fd34"}
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.687523 4922 scope.go:117] "RemoveContainer" containerID="5dab333fea395fae6bb839d8692feca6edbe390ec2feb69cc48f36c424386483"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.687275 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.693386 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.693469 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bde45419-e8b1-4842-afc2-dcafbeacfa06","Type":"ContainerDied","Data":"4b40606ca8fa58b0b341d8c2b80f7f1858ff637c5b6d3103cf432f9b5428a777"}
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.693987 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.707939 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.724189 4922 scope.go:117] "RemoveContainer" containerID="7524fe92aaca627bcfa079c6cfda6768d47ab983d9db1d6836f595be8fa7591e"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.767414 4922 scope.go:117] "RemoveContainer" containerID="dd10998a844d8ad16598aec436ead77de53a0fd7bec15ae9cbc34d3f2b6d21ec"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.773340 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.785028 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.795643 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.807298 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.818169 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 14:31:55 crc kubenswrapper[4922]: E0126 14:31:55.818619 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerName="nova-metadata-log"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.818637 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerName="nova-metadata-log"
Jan 26 14:31:55 crc kubenswrapper[4922]: E0126 14:31:55.818665 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2689d1-5817-40f6-86f8-f6b46ccdabf0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.818671 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2689d1-5817-40f6-86f8-f6b46ccdabf0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 26 14:31:55 crc kubenswrapper[4922]: E0126 14:31:55.818683 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerName="nova-metadata-metadata"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.818692 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerName="nova-metadata-metadata"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.818899 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerName="nova-metadata-log"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.818912 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde45419-e8b1-4842-afc2-dcafbeacfa06" containerName="nova-metadata-metadata"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.818930 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2689d1-5817-40f6-86f8-f6b46ccdabf0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.820171 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.828619 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.828800 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.831284 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.840470 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.843523 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.848084 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.848309 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.854237 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.854671 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.913525 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jwf\" (UniqueName: \"kubernetes.io/projected/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-kube-api-access-85jwf\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.913588 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-logs\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.913619 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-config-data\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.913727 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxkv\" (UniqueName: \"kubernetes.io/projected/f83d6fb5-2b7a-4982-9719-3a03aa125f00-kube-api-access-2xxkv\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.913765 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.913916 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.914007 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.914037 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.914088 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.914124 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.929821 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55556d9745-knxv2"]
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.931382 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:55 crc kubenswrapper[4922]: I0126 14:31:55.937543 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55556d9745-knxv2"]
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016306 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-logs\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016367 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-config-data\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016394 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xxkv\" (UniqueName: \"kubernetes.io/projected/f83d6fb5-2b7a-4982-9719-3a03aa125f00-kube-api-access-2xxkv\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016412 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016466 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-swift-storage-0\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016493 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-sb\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016515 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016544 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-nb\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016579 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016598 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016621 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016644 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-config\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016660 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016680 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0a47fca1-5b0a-41c4-a75a-15e005e0e385-kube-api-access-mpwmw\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.016739 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85jwf\" (UniqueName: \"kubernetes.io/projected/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-kube-api-access-85jwf\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.017198 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-logs\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.020096 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-svc\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.024986 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.026026 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-config-data\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.026972 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.028811 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.032770 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.041143 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.042420 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83d6fb5-2b7a-4982-9719-3a03aa125f00-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.043666 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xxkv\" (UniqueName: \"kubernetes.io/projected/f83d6fb5-2b7a-4982-9719-3a03aa125f00-kube-api-access-2xxkv\") pod \"nova-cell1-novncproxy-0\" (UID: \"f83d6fb5-2b7a-4982-9719-3a03aa125f00\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.043772 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85jwf\" (UniqueName: \"kubernetes.io/projected/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-kube-api-access-85jwf\") pod \"nova-metadata-0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.122440 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-config\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.122499 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0a47fca1-5b0a-41c4-a75a-15e005e0e385-kube-api-access-mpwmw\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.123226 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-svc\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.123511 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-swift-storage-0\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.123569 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-sb\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.124204 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-nb\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.124620 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-sb\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.125104 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-swift-storage-0\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.125500 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-nb\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.125812 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-svc\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.126171 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-config\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.141831 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0a47fca1-5b0a-41c4-a75a-15e005e0e385-kube-api-access-mpwmw\") pod \"dnsmasq-dns-55556d9745-knxv2\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.142657 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.166555 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.251601 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:56 crc kubenswrapper[4922]: W0126 14:31:56.614770 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82bfcb0b_ed58_4b8d_8352_8199ee74bae0.slice/crio-9310f71fa3b27ad741a7ec634cd4eda359bca73b6de4be8d66877b78de4fe287 WatchSource:0}: Error finding container 9310f71fa3b27ad741a7ec634cd4eda359bca73b6de4be8d66877b78de4fe287: Status 404 returned error can't find the container with id 9310f71fa3b27ad741a7ec634cd4eda359bca73b6de4be8d66877b78de4fe287
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.615688 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.691322 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.705334 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82bfcb0b-ed58-4b8d-8352-8199ee74bae0","Type":"ContainerStarted","Data":"9310f71fa3b27ad741a7ec634cd4eda359bca73b6de4be8d66877b78de4fe287"}
Jan 26 14:31:56 crc kubenswrapper[4922]: I0126 14:31:56.874417 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55556d9745-knxv2"]
Jan 26 14:31:56 crc kubenswrapper[4922]: W0126 14:31:56.888883 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a47fca1_5b0a_41c4_a75a_15e005e0e385.slice/crio-c81cd23c6c7721850e49e426f4cae55b0f4d37539e6fc3a29ef098ed0352f73f WatchSource:0}: Error finding container c81cd23c6c7721850e49e426f4cae55b0f4d37539e6fc3a29ef098ed0352f73f: Status 404 returned error can't find the container with id c81cd23c6c7721850e49e426f4cae55b0f4d37539e6fc3a29ef098ed0352f73f
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.103113 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2689d1-5817-40f6-86f8-f6b46ccdabf0" path="/var/lib/kubelet/pods/6a2689d1-5817-40f6-86f8-f6b46ccdabf0/volumes"
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.103834 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde45419-e8b1-4842-afc2-dcafbeacfa06" path="/var/lib/kubelet/pods/bde45419-e8b1-4842-afc2-dcafbeacfa06/volumes"
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.717846 4922 generic.go:334] "Generic (PLEG): container finished" podID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" containerID="2f909b6a53ceeb784cc00754f15f7b7c0532bb6deb06311a49088e9d49f9e18a" exitCode=0
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.718131 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55556d9745-knxv2" event={"ID":"0a47fca1-5b0a-41c4-a75a-15e005e0e385","Type":"ContainerDied","Data":"2f909b6a53ceeb784cc00754f15f7b7c0532bb6deb06311a49088e9d49f9e18a"}
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.718156 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55556d9745-knxv2" event={"ID":"0a47fca1-5b0a-41c4-a75a-15e005e0e385","Type":"ContainerStarted","Data":"c81cd23c6c7721850e49e426f4cae55b0f4d37539e6fc3a29ef098ed0352f73f"}
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.723194 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f83d6fb5-2b7a-4982-9719-3a03aa125f00","Type":"ContainerStarted","Data":"4e0cafb0ff8241a24a69519c6c934075ca94fc27b2bd32a67b76d25be93dbc61"}
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.723235 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f83d6fb5-2b7a-4982-9719-3a03aa125f00","Type":"ContainerStarted","Data":"1610fa5667a4371dd50ae51871afa401745d13c9bfec808cff53c2364a6ded1d"}
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.742300 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82bfcb0b-ed58-4b8d-8352-8199ee74bae0","Type":"ContainerStarted","Data":"11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51"}
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.742343 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82bfcb0b-ed58-4b8d-8352-8199ee74bae0","Type":"ContainerStarted","Data":"ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5"}
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.772662 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.771218055 podStartE2EDuration="2.771218055s" podCreationTimestamp="2026-01-26 14:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:57.768629676 +0000 UTC m=+1334.970892448" watchObservedRunningTime="2026-01-26 14:31:57.771218055 +0000 UTC m=+1334.973480827"
Jan 26 14:31:57 crc kubenswrapper[4922]: I0126 14:31:57.809221 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.809202533 podStartE2EDuration="2.809202533s" podCreationTimestamp="2026-01-26 14:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:57.798373132 +0000 UTC m=+1335.000635904" watchObservedRunningTime="2026-01-26 14:31:57.809202533 +0000 UTC m=+1335.011465305"
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.321465 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.321800 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="ceilometer-central-agent" containerID="cri-o://4ee4b423625678aeba661b2b6db24205062ec78bcaa918a6be37d1d581da32f9" gracePeriod=30
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.321862 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="proxy-httpd" containerID="cri-o://5eeb3efb19cd57deebf7443ca596450dfaea000e305c44816f93e3f3f57c7c81" gracePeriod=30
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.321955 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="sg-core" containerID="cri-o://8c579034241cee7c44654189913097803a8ef439c95da2b7c074531f86ff139b" gracePeriod=30
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.321955 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="ceilometer-notification-agent" containerID="cri-o://43777e9dca52b60fb9fa26474914ee6f56c72d1c342a54279098085c2c98a596" gracePeriod=30
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.364318 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.756521 4922 generic.go:334] "Generic (PLEG): container finished" podID="69c52940-14a9-49b1-84ab-40128358ed2d" containerID="5eeb3efb19cd57deebf7443ca596450dfaea000e305c44816f93e3f3f57c7c81" exitCode=0
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.756929 4922 generic.go:334] "Generic (PLEG): container finished" podID="69c52940-14a9-49b1-84ab-40128358ed2d" containerID="8c579034241cee7c44654189913097803a8ef439c95da2b7c074531f86ff139b" exitCode=2
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.756973 4922 generic.go:334] "Generic (PLEG): container finished" podID="69c52940-14a9-49b1-84ab-40128358ed2d" containerID="4ee4b423625678aeba661b2b6db24205062ec78bcaa918a6be37d1d581da32f9" exitCode=0
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.756574 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerDied","Data":"5eeb3efb19cd57deebf7443ca596450dfaea000e305c44816f93e3f3f57c7c81"}
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.757128 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerDied","Data":"8c579034241cee7c44654189913097803a8ef439c95da2b7c074531f86ff139b"}
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.757151 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerDied","Data":"4ee4b423625678aeba661b2b6db24205062ec78bcaa918a6be37d1d581da32f9"}
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.763537 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55556d9745-knxv2" event={"ID":"0a47fca1-5b0a-41c4-a75a-15e005e0e385","Type":"ContainerStarted","Data":"216636d782b5a086696b0ca146e89125c4544c9143f66567db02cc32f0b7950e"}
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.763771 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-log" containerID="cri-o://ef52a3192a2a409e7d785294151c4d4b2d4030227b07bf49b6dd2ed5e2289a88" gracePeriod=30
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.763854 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.763855 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-api" containerID="cri-o://0fa5f3f9824e862f1a4d3584c440e5c38d9907777bbddcfa4433de4e405adcd8" gracePeriod=30
Jan 26 14:31:58 crc kubenswrapper[4922]: I0126 14:31:58.791969 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55556d9745-knxv2" podStartSLOduration=3.791952842 podStartE2EDuration="3.791952842s" podCreationTimestamp="2026-01-26 14:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:31:58.78666378 +0000 UTC m=+1335.988926552" watchObservedRunningTime="2026-01-26 14:31:58.791952842 +0000 UTC m=+1335.994215614"
Jan 26 14:31:59 crc kubenswrapper[4922]: I0126 14:31:59.773960 4922 generic.go:334] "Generic (PLEG): container finished" podID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerID="0fa5f3f9824e862f1a4d3584c440e5c38d9907777bbddcfa4433de4e405adcd8" exitCode=0
Jan 26 14:31:59 crc kubenswrapper[4922]: I0126 14:31:59.773991 4922 generic.go:334] "Generic (PLEG): container finished" podID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerID="ef52a3192a2a409e7d785294151c4d4b2d4030227b07bf49b6dd2ed5e2289a88" exitCode=143
Jan 26 14:31:59 crc kubenswrapper[4922]: I0126 14:31:59.774910 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"592c0fd8-0a3a-410a-a26c-5f91d54365c6","Type":"ContainerDied","Data":"0fa5f3f9824e862f1a4d3584c440e5c38d9907777bbddcfa4433de4e405adcd8"}
Jan 26 14:31:59 crc kubenswrapper[4922]: I0126 14:31:59.774940 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"592c0fd8-0a3a-410a-a26c-5f91d54365c6","Type":"ContainerDied","Data":"ef52a3192a2a409e7d785294151c4d4b2d4030227b07bf49b6dd2ed5e2289a88"}
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.108019 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.209010 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-combined-ca-bundle\") pod \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") "
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.209163 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592c0fd8-0a3a-410a-a26c-5f91d54365c6-logs\") pod \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") "
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.209248 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-config-data\") pod \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") "
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.209282 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kjrm\" (UniqueName: \"kubernetes.io/projected/592c0fd8-0a3a-410a-a26c-5f91d54365c6-kube-api-access-9kjrm\") pod \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\" (UID: \"592c0fd8-0a3a-410a-a26c-5f91d54365c6\") "
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.209583 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592c0fd8-0a3a-410a-a26c-5f91d54365c6-logs" (OuterVolumeSpecName: "logs") pod "592c0fd8-0a3a-410a-a26c-5f91d54365c6" (UID: "592c0fd8-0a3a-410a-a26c-5f91d54365c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.214524 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592c0fd8-0a3a-410a-a26c-5f91d54365c6-kube-api-access-9kjrm" (OuterVolumeSpecName: "kube-api-access-9kjrm") pod "592c0fd8-0a3a-410a-a26c-5f91d54365c6" (UID: "592c0fd8-0a3a-410a-a26c-5f91d54365c6"). InnerVolumeSpecName "kube-api-access-9kjrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.239983 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-config-data" (OuterVolumeSpecName: "config-data") pod "592c0fd8-0a3a-410a-a26c-5f91d54365c6" (UID: "592c0fd8-0a3a-410a-a26c-5f91d54365c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.261998 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "592c0fd8-0a3a-410a-a26c-5f91d54365c6" (UID: "592c0fd8-0a3a-410a-a26c-5f91d54365c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.312013 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.312087 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592c0fd8-0a3a-410a-a26c-5f91d54365c6-logs\") on node \"crc\" DevicePath \"\""
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.312105 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592c0fd8-0a3a-410a-a26c-5f91d54365c6-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.312121 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kjrm\" (UniqueName: \"kubernetes.io/projected/592c0fd8-0a3a-410a-a26c-5f91d54365c6-kube-api-access-9kjrm\") on node \"crc\" DevicePath \"\""
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.788577 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.788576 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"592c0fd8-0a3a-410a-a26c-5f91d54365c6","Type":"ContainerDied","Data":"00045a88a4453ba490cb91b2a39d4c3687f3a55f499074d86b5a247a9b3ee93e"}
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.788757 4922 scope.go:117] "RemoveContainer" containerID="0fa5f3f9824e862f1a4d3584c440e5c38d9907777bbddcfa4433de4e405adcd8"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.795100 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerDied","Data":"43777e9dca52b60fb9fa26474914ee6f56c72d1c342a54279098085c2c98a596"}
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.795108 4922 generic.go:334] "Generic (PLEG): container finished" podID="69c52940-14a9-49b1-84ab-40128358ed2d" containerID="43777e9dca52b60fb9fa26474914ee6f56c72d1c342a54279098085c2c98a596" exitCode=0
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.859845 4922 scope.go:117] "RemoveContainer" containerID="ef52a3192a2a409e7d785294151c4d4b2d4030227b07bf49b6dd2ed5e2289a88"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.862913 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.913576 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.932514 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 14:32:00 crc kubenswrapper[4922]: E0126 14:32:00.933294 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-api"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.933326 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-api"
Jan 26 14:32:00 crc kubenswrapper[4922]: E0126 14:32:00.933361 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-log"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.933371 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-log"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.933691 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-api"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.933738 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" containerName="nova-api-log"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.935394 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.938920 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.939124 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.939311 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 26 14:32:00 crc kubenswrapper[4922]: I0126 14:32:00.951663 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.026428 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.026490 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a73eba-dd14-4190-9208-23218ff6bc07-logs\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.026571 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcpkw\" (UniqueName: \"kubernetes.io/projected/e0a73eba-dd14-4190-9208-23218ff6bc07-kube-api-access-mcpkw\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.026621 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.026651 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.026671 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-config-data\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.120892 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592c0fd8-0a3a-410a-a26c-5f91d54365c6" path="/var/lib/kubelet/pods/592c0fd8-0a3a-410a-a26c-5f91d54365c6/volumes"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.128801 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcpkw\" (UniqueName: \"kubernetes.io/projected/e0a73eba-dd14-4190-9208-23218ff6bc07-kube-api-access-mcpkw\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.128881 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.128917 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.128939 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-config-data\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.128990 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.129019 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a73eba-dd14-4190-9208-23218ff6bc07-logs\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.136948 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a73eba-dd14-4190-9208-23218ff6bc07-logs\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.141200 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.141338 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.141801 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.142037 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-config-data\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.142966 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.143021 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.148964 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcpkw\" (UniqueName: \"kubernetes.io/projected/e0a73eba-dd14-4190-9208-23218ff6bc07-kube-api-access-mcpkw\") pod \"nova-api-0\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.167467 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.222050 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.266034 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.332010 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-config-data\") pod \"69c52940-14a9-49b1-84ab-40128358ed2d\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") "
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.332401 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-ceilometer-tls-certs\") pod \"69c52940-14a9-49b1-84ab-40128358ed2d\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") "
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.332434 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-combined-ca-bundle\") pod \"69c52940-14a9-49b1-84ab-40128358ed2d\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") "
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.332513 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns4kc\" (UniqueName: \"kubernetes.io/projected/69c52940-14a9-49b1-84ab-40128358ed2d-kube-api-access-ns4kc\") pod \"69c52940-14a9-49b1-84ab-40128358ed2d\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") "
Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.332600 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-run-httpd\") pod \"69c52940-14a9-49b1-84ab-40128358ed2d\" (UID:
\"69c52940-14a9-49b1-84ab-40128358ed2d\") " Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.332654 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-log-httpd\") pod \"69c52940-14a9-49b1-84ab-40128358ed2d\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.332681 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-scripts\") pod \"69c52940-14a9-49b1-84ab-40128358ed2d\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.332761 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-sg-core-conf-yaml\") pod \"69c52940-14a9-49b1-84ab-40128358ed2d\" (UID: \"69c52940-14a9-49b1-84ab-40128358ed2d\") " Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.333185 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "69c52940-14a9-49b1-84ab-40128358ed2d" (UID: "69c52940-14a9-49b1-84ab-40128358ed2d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.333380 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "69c52940-14a9-49b1-84ab-40128358ed2d" (UID: "69c52940-14a9-49b1-84ab-40128358ed2d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.333804 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.333830 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69c52940-14a9-49b1-84ab-40128358ed2d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.338322 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c52940-14a9-49b1-84ab-40128358ed2d-kube-api-access-ns4kc" (OuterVolumeSpecName: "kube-api-access-ns4kc") pod "69c52940-14a9-49b1-84ab-40128358ed2d" (UID: "69c52940-14a9-49b1-84ab-40128358ed2d"). InnerVolumeSpecName "kube-api-access-ns4kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.345266 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-scripts" (OuterVolumeSpecName: "scripts") pod "69c52940-14a9-49b1-84ab-40128358ed2d" (UID: "69c52940-14a9-49b1-84ab-40128358ed2d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.372751 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "69c52940-14a9-49b1-84ab-40128358ed2d" (UID: "69c52940-14a9-49b1-84ab-40128358ed2d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.399897 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "69c52940-14a9-49b1-84ab-40128358ed2d" (UID: "69c52940-14a9-49b1-84ab-40128358ed2d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.435581 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.435620 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.435633 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.435646 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns4kc\" (UniqueName: \"kubernetes.io/projected/69c52940-14a9-49b1-84ab-40128358ed2d-kube-api-access-ns4kc\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.446690 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-config-data" (OuterVolumeSpecName: "config-data") pod "69c52940-14a9-49b1-84ab-40128358ed2d" (UID: "69c52940-14a9-49b1-84ab-40128358ed2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.454916 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69c52940-14a9-49b1-84ab-40128358ed2d" (UID: "69c52940-14a9-49b1-84ab-40128358ed2d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.537237 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.537265 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c52940-14a9-49b1-84ab-40128358ed2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.706342 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:32:01 crc kubenswrapper[4922]: W0126 14:32:01.764040 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0a73eba_dd14_4190_9208_23218ff6bc07.slice/crio-3146760b5119bde5ec3688106a80be0698cbe6bb0381a0bb242a2559a8e34602 WatchSource:0}: Error finding container 3146760b5119bde5ec3688106a80be0698cbe6bb0381a0bb242a2559a8e34602: Status 404 returned error can't find the container with id 3146760b5119bde5ec3688106a80be0698cbe6bb0381a0bb242a2559a8e34602 Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.820736 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69c52940-14a9-49b1-84ab-40128358ed2d","Type":"ContainerDied","Data":"c3cb64148f7f05ea2889ab88044a196ebb4970c0f3bbb4e04e0484c5b021f18c"} Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.820779 4922 scope.go:117] "RemoveContainer" containerID="5eeb3efb19cd57deebf7443ca596450dfaea000e305c44816f93e3f3f57c7c81" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.820922 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.823338 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0a73eba-dd14-4190-9208-23218ff6bc07","Type":"ContainerStarted","Data":"3146760b5119bde5ec3688106a80be0698cbe6bb0381a0bb242a2559a8e34602"} Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.945193 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.967707 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.984509 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:32:01 crc kubenswrapper[4922]: E0126 14:32:01.984955 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="ceilometer-central-agent" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.984975 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="ceilometer-central-agent" Jan 26 14:32:01 crc kubenswrapper[4922]: E0126 14:32:01.984988 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="ceilometer-notification-agent" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.984995 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="ceilometer-notification-agent" Jan 26 14:32:01 crc kubenswrapper[4922]: E0126 14:32:01.985009 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="sg-core" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.985015 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="sg-core" Jan 26 14:32:01 crc kubenswrapper[4922]: E0126 14:32:01.985040 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="proxy-httpd" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.985045 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="proxy-httpd" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.985270 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="proxy-httpd" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.985282 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="ceilometer-central-agent" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.985303 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="ceilometer-notification-agent" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.985310 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" containerName="sg-core" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.987221 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.992382 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.992657 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 14:32:01 crc kubenswrapper[4922]: I0126 14:32:01.994314 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.012228 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.014456 4922 scope.go:117] "RemoveContainer" containerID="8c579034241cee7c44654189913097803a8ef439c95da2b7c074531f86ff139b" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.037148 4922 scope.go:117] "RemoveContainer" containerID="43777e9dca52b60fb9fa26474914ee6f56c72d1c342a54279098085c2c98a596" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.047784 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-config-data\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.047838 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-scripts\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.047875 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31909446-1712-442b-a346-7b4bb84f8584-run-httpd\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.047941 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31909446-1712-442b-a346-7b4bb84f8584-log-httpd\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.048043 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bsdm\" (UniqueName: \"kubernetes.io/projected/31909446-1712-442b-a346-7b4bb84f8584-kube-api-access-7bsdm\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.048077 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.048107 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.048190 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.059903 4922 scope.go:117] "RemoveContainer" containerID="4ee4b423625678aeba661b2b6db24205062ec78bcaa918a6be37d1d581da32f9" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.150410 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.150523 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.150615 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.150744 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-config-data\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.150785 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-scripts\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.150835 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31909446-1712-442b-a346-7b4bb84f8584-run-httpd\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.150952 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31909446-1712-442b-a346-7b4bb84f8584-log-httpd\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.151144 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bsdm\" (UniqueName: \"kubernetes.io/projected/31909446-1712-442b-a346-7b4bb84f8584-kube-api-access-7bsdm\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " 
pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.151627 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31909446-1712-442b-a346-7b4bb84f8584-run-httpd\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.151744 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31909446-1712-442b-a346-7b4bb84f8584-log-httpd\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.156954 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.157811 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-scripts\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.161659 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-config-data\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.163958 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.164233 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/31909446-1712-442b-a346-7b4bb84f8584-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.196614 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bsdm\" (UniqueName: \"kubernetes.io/projected/31909446-1712-442b-a346-7b4bb84f8584-kube-api-access-7bsdm\") pod \"ceilometer-0\" (UID: \"31909446-1712-442b-a346-7b4bb84f8584\") " pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.308342 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 14:32:02 crc kubenswrapper[4922]: W0126 14:32:02.799832 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31909446_1712_442b_a346_7b4bb84f8584.slice/crio-beddf7fa916f9cdb5441fe7260def9b7a7be66f78d310c2a1723291b9a31a549 WatchSource:0}: Error finding container beddf7fa916f9cdb5441fe7260def9b7a7be66f78d310c2a1723291b9a31a549: Status 404 returned error can't find the container with id beddf7fa916f9cdb5441fe7260def9b7a7be66f78d310c2a1723291b9a31a549 Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.811609 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.845926 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0a73eba-dd14-4190-9208-23218ff6bc07","Type":"ContainerStarted","Data":"d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329"} Jan 26 14:32:02 crc kubenswrapper[4922]: I0126 14:32:02.848130 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31909446-1712-442b-a346-7b4bb84f8584","Type":"ContainerStarted","Data":"beddf7fa916f9cdb5441fe7260def9b7a7be66f78d310c2a1723291b9a31a549"} Jan 26 14:32:03 crc kubenswrapper[4922]: I0126 14:32:03.108400 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c52940-14a9-49b1-84ab-40128358ed2d" path="/var/lib/kubelet/pods/69c52940-14a9-49b1-84ab-40128358ed2d/volumes" Jan 26 14:32:03 crc kubenswrapper[4922]: I0126 14:32:03.867335 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0a73eba-dd14-4190-9208-23218ff6bc07","Type":"ContainerStarted","Data":"4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5"} Jan 26 14:32:03 crc kubenswrapper[4922]: I0126 14:32:03.902207 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.9021838840000003 podStartE2EDuration="3.902183884s" podCreationTimestamp="2026-01-26 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:32:03.891720523 +0000 UTC m=+1341.093983305" watchObservedRunningTime="2026-01-26 14:32:03.902183884 +0000 UTC m=+1341.104446666" Jan 26 14:32:04 crc kubenswrapper[4922]: I0126 14:32:04.878431 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31909446-1712-442b-a346-7b4bb84f8584","Type":"ContainerStarted","Data":"49b33ead720bb4fc82e7ff9f5c731d861f120c782c67f07161951a9e1cfaa79b"} Jan 26 14:32:04 crc kubenswrapper[4922]: I0126 14:32:04.879031 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31909446-1712-442b-a346-7b4bb84f8584","Type":"ContainerStarted","Data":"f6d98215f4d42f304c85be007a849f36c2b9c5c09c83cdc005f7d8045667a003"} Jan 26 14:32:05 crc kubenswrapper[4922]: I0126 14:32:05.890410 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31909446-1712-442b-a346-7b4bb84f8584","Type":"ContainerStarted","Data":"7c11109f4c4bc7eb6a755d2a1ffe828f60884b322665f93c951c26da4ba34b9b"} Jan 26 14:32:06 crc kubenswrapper[4922]: I0126 14:32:06.143508 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 14:32:06 crc kubenswrapper[4922]: 
I0126 14:32:06.143603 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 14:32:06 crc kubenswrapper[4922]: I0126 14:32:06.167352 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:32:06 crc kubenswrapper[4922]: I0126 14:32:06.193031 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:32:06 crc kubenswrapper[4922]: I0126 14:32:06.253386 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55556d9745-knxv2" Jan 26 14:32:06 crc kubenswrapper[4922]: I0126 14:32:06.330417 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb96c9b4c-rh7c6"] Jan 26 14:32:06 crc kubenswrapper[4922]: I0126 14:32:06.330759 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" podUID="aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" containerName="dnsmasq-dns" containerID="cri-o://d5301014e3f433953b4ac4ac2c9603d4579c4f6f1fc5ff104e6f21f2c4ae1480" gracePeriod=10 Jan 26 14:32:06 crc kubenswrapper[4922]: I0126 14:32:06.901117 4922 generic.go:334] "Generic (PLEG): container finished" podID="aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" containerID="d5301014e3f433953b4ac4ac2c9603d4579c4f6f1fc5ff104e6f21f2c4ae1480" exitCode=0 Jan 26 14:32:06 crc kubenswrapper[4922]: I0126 14:32:06.901154 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" event={"ID":"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f","Type":"ContainerDied","Data":"d5301014e3f433953b4ac4ac2c9603d4579c4f6f1fc5ff104e6f21f2c4ae1480"} Jan 26 14:32:06 crc kubenswrapper[4922]: I0126 14:32:06.920747 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.112647 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-nt6h5"] Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.113824 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.117741 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.117901 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.128250 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nt6h5"] Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.171167 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.171431 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.266096 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-config-data\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.266404 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tccmx\" (UniqueName: \"kubernetes.io/projected/2bcbc70c-9471-43ef-9411-bda440b81b54-kube-api-access-tccmx\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.266491 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.266571 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-scripts\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.368841 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-scripts\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.369529 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-config-data\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.369653 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tccmx\" (UniqueName: \"kubernetes.io/projected/2bcbc70c-9471-43ef-9411-bda440b81b54-kube-api-access-tccmx\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.369821 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.374640 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.384657 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-config-data\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.396451 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-scripts\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.406535 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tccmx\" (UniqueName: \"kubernetes.io/projected/2bcbc70c-9471-43ef-9411-bda440b81b54-kube-api-access-tccmx\") pod \"nova-cell1-cell-mapping-nt6h5\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.437569 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.592350 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.679669 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-config\") pod \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.680083 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-nb\") pod \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.680140 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4jf9\" (UniqueName: \"kubernetes.io/projected/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-kube-api-access-b4jf9\") pod \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.680160 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-svc\") pod \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.680347 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-swift-storage-0\") pod \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.680391 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-sb\") pod \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\" (UID: \"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f\") " Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.687237 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-kube-api-access-b4jf9" (OuterVolumeSpecName: "kube-api-access-b4jf9") pod "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" (UID: "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f"). InnerVolumeSpecName "kube-api-access-b4jf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.761933 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" (UID: "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.789451 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.789491 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4jf9\" (UniqueName: \"kubernetes.io/projected/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-kube-api-access-b4jf9\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.810630 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-config" (OuterVolumeSpecName: "config") pod "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" (UID: "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.814690 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" (UID: "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.882560 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" (UID: "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.886600 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" (UID: "aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.897075 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.897107 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.897119 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.897128 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.945655 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31909446-1712-442b-a346-7b4bb84f8584","Type":"ContainerStarted","Data":"790634d4669c709a87e98ff48a6fd79c07824801c9a8ad788c9defc772041f1d"} Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.946220 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.963912 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.964105 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb96c9b4c-rh7c6" event={"ID":"aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f","Type":"ContainerDied","Data":"f5cfe739c7830a35f36ace1835859c8a9b16611fbd72a43d47440f6171603fdb"} Jan 26 14:32:07 crc kubenswrapper[4922]: I0126 14:32:07.964161 4922 scope.go:117] "RemoveContainer" containerID="d5301014e3f433953b4ac4ac2c9603d4579c4f6f1fc5ff104e6f21f2c4ae1480" Jan 26 14:32:08 crc kubenswrapper[4922]: I0126 14:32:07.998975 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.520168656 podStartE2EDuration="6.998945003s" podCreationTimestamp="2026-01-26 14:32:01 +0000 UTC" firstStartedPulling="2026-01-26 14:32:02.803612181 +0000 UTC m=+1340.005874983" lastFinishedPulling="2026-01-26 14:32:07.282388558 +0000 UTC m=+1344.484651330" observedRunningTime="2026-01-26 14:32:07.985364289 +0000 UTC m=+1345.187627061" watchObservedRunningTime="2026-01-26 14:32:07.998945003 +0000 UTC m=+1345.201207775" Jan 26 14:32:08 crc kubenswrapper[4922]: I0126 14:32:08.013305 4922 scope.go:117] "RemoveContainer" containerID="8fdaaa5f4795d71ccf1e99a64e78356bdafbe2a5432963066243279e8b2d9555" Jan 26 14:32:08 crc kubenswrapper[4922]: I0126 14:32:08.030084 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb96c9b4c-rh7c6"] Jan 26 14:32:08 crc kubenswrapper[4922]: I0126 14:32:08.041891 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fb96c9b4c-rh7c6"] Jan 26 14:32:08 crc kubenswrapper[4922]: I0126 14:32:08.099937 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nt6h5"] Jan 26 14:32:08 crc 
kubenswrapper[4922]: W0126 14:32:08.104155 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bcbc70c_9471_43ef_9411_bda440b81b54.slice/crio-d7e000d1ada578bc4ca5e86e4ccc57f8c0e6445a962daa230a14c678bfaff295 WatchSource:0}: Error finding container d7e000d1ada578bc4ca5e86e4ccc57f8c0e6445a962daa230a14c678bfaff295: Status 404 returned error can't find the container with id d7e000d1ada578bc4ca5e86e4ccc57f8c0e6445a962daa230a14c678bfaff295 Jan 26 14:32:08 crc kubenswrapper[4922]: I0126 14:32:08.980615 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nt6h5" event={"ID":"2bcbc70c-9471-43ef-9411-bda440b81b54","Type":"ContainerStarted","Data":"f1e0655729c41ea7b98b471b04d82dc44542b04edfd647d7a49f4689f5ee5901"} Jan 26 14:32:08 crc kubenswrapper[4922]: I0126 14:32:08.980934 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nt6h5" event={"ID":"2bcbc70c-9471-43ef-9411-bda440b81b54","Type":"ContainerStarted","Data":"d7e000d1ada578bc4ca5e86e4ccc57f8c0e6445a962daa230a14c678bfaff295"} Jan 26 14:32:09 crc kubenswrapper[4922]: I0126 14:32:09.002943 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-nt6h5" podStartSLOduration=2.002926211 podStartE2EDuration="2.002926211s" podCreationTimestamp="2026-01-26 14:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:32:08.993094017 +0000 UTC m=+1346.195356789" watchObservedRunningTime="2026-01-26 14:32:09.002926211 +0000 UTC m=+1346.205188983" Jan 26 14:32:09 crc kubenswrapper[4922]: I0126 14:32:09.108035 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" path="/var/lib/kubelet/pods/aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f/volumes" Jan 26 14:32:11 crc kubenswrapper[4922]: I0126 14:32:11.267187 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 14:32:11 crc kubenswrapper[4922]: I0126 14:32:11.268426 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 14:32:11 crc kubenswrapper[4922]: I0126 14:32:11.306923 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:32:11 crc kubenswrapper[4922]: I0126 14:32:11.307022 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:32:12 crc kubenswrapper[4922]: I0126 14:32:12.284636 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 14:32:12 crc kubenswrapper[4922]: I0126 14:32:12.284623 4922 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 14:32:14 crc kubenswrapper[4922]: I0126 14:32:14.048551 4922 generic.go:334] "Generic (PLEG): container finished" podID="2bcbc70c-9471-43ef-9411-bda440b81b54" containerID="f1e0655729c41ea7b98b471b04d82dc44542b04edfd647d7a49f4689f5ee5901" exitCode=0 Jan 26 14:32:14 crc kubenswrapper[4922]: I0126 14:32:14.048698 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nt6h5" event={"ID":"2bcbc70c-9471-43ef-9411-bda440b81b54","Type":"ContainerDied","Data":"f1e0655729c41ea7b98b471b04d82dc44542b04edfd647d7a49f4689f5ee5901"} Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.476376 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.670285 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-config-data\") pod \"2bcbc70c-9471-43ef-9411-bda440b81b54\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.670956 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-combined-ca-bundle\") pod \"2bcbc70c-9471-43ef-9411-bda440b81b54\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.671005 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tccmx\" (UniqueName: \"kubernetes.io/projected/2bcbc70c-9471-43ef-9411-bda440b81b54-kube-api-access-tccmx\") pod \"2bcbc70c-9471-43ef-9411-bda440b81b54\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.671113 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-scripts\") pod \"2bcbc70c-9471-43ef-9411-bda440b81b54\" (UID: \"2bcbc70c-9471-43ef-9411-bda440b81b54\") " Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.676852 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bcbc70c-9471-43ef-9411-bda440b81b54-kube-api-access-tccmx" (OuterVolumeSpecName: "kube-api-access-tccmx") pod "2bcbc70c-9471-43ef-9411-bda440b81b54" (UID: "2bcbc70c-9471-43ef-9411-bda440b81b54"). InnerVolumeSpecName "kube-api-access-tccmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.682349 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-scripts" (OuterVolumeSpecName: "scripts") pod "2bcbc70c-9471-43ef-9411-bda440b81b54" (UID: "2bcbc70c-9471-43ef-9411-bda440b81b54"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.699015 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-config-data" (OuterVolumeSpecName: "config-data") pod "2bcbc70c-9471-43ef-9411-bda440b81b54" (UID: "2bcbc70c-9471-43ef-9411-bda440b81b54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.711536 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bcbc70c-9471-43ef-9411-bda440b81b54" (UID: "2bcbc70c-9471-43ef-9411-bda440b81b54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.773370 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.773816 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.773915 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tccmx\" (UniqueName: \"kubernetes.io/projected/2bcbc70c-9471-43ef-9411-bda440b81b54-kube-api-access-tccmx\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:15 crc kubenswrapper[4922]: I0126 14:32:15.774026 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bcbc70c-9471-43ef-9411-bda440b81b54-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.079708 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nt6h5" event={"ID":"2bcbc70c-9471-43ef-9411-bda440b81b54","Type":"ContainerDied","Data":"d7e000d1ada578bc4ca5e86e4ccc57f8c0e6445a962daa230a14c678bfaff295"} Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.079777 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e000d1ada578bc4ca5e86e4ccc57f8c0e6445a962daa230a14c678bfaff295" Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.079793 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nt6h5" Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.150921 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.155241 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.161203 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.420359 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.421208 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-log" containerID="cri-o://d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329" gracePeriod=30 Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.421336 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-api" containerID="cri-o://4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5" gracePeriod=30 Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.441798 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.442347 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cfad7c05-9d6a-4435-b019-731d7e1c5ec3" containerName="nova-scheduler-scheduler" containerID="cri-o://aa21a52dd279c7ffddd6539a48452941eaf397e8108327321383a9116c469e8f" gracePeriod=30 Jan 26 14:32:16 crc kubenswrapper[4922]: I0126 14:32:16.511316 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:32:16 crc kubenswrapper[4922]: E0126 14:32:16.907182 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa21a52dd279c7ffddd6539a48452941eaf397e8108327321383a9116c469e8f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 14:32:16 crc kubenswrapper[4922]: E0126 14:32:16.909183 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa21a52dd279c7ffddd6539a48452941eaf397e8108327321383a9116c469e8f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 14:32:16 crc kubenswrapper[4922]: E0126 14:32:16.910706 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa21a52dd279c7ffddd6539a48452941eaf397e8108327321383a9116c469e8f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 14:32:16 crc kubenswrapper[4922]: E0126 14:32:16.910904 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="cfad7c05-9d6a-4435-b019-731d7e1c5ec3" containerName="nova-scheduler-scheduler" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.090994 4922 generic.go:334] "Generic (PLEG): container finished" podID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerID="d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329" exitCode=143 Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.092802 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0a73eba-dd14-4190-9208-23218ff6bc07","Type":"ContainerDied","Data":"d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329"} Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.106707 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.732770 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.836001 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-config-data\") pod \"e0a73eba-dd14-4190-9208-23218ff6bc07\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.836118 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a73eba-dd14-4190-9208-23218ff6bc07-logs\") pod \"e0a73eba-dd14-4190-9208-23218ff6bc07\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.836159 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-internal-tls-certs\") pod \"e0a73eba-dd14-4190-9208-23218ff6bc07\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.836352 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcpkw\" (UniqueName: \"kubernetes.io/projected/e0a73eba-dd14-4190-9208-23218ff6bc07-kube-api-access-mcpkw\") pod \"e0a73eba-dd14-4190-9208-23218ff6bc07\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.836463 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-combined-ca-bundle\") pod \"e0a73eba-dd14-4190-9208-23218ff6bc07\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.836496 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-public-tls-certs\") pod \"e0a73eba-dd14-4190-9208-23218ff6bc07\" (UID: \"e0a73eba-dd14-4190-9208-23218ff6bc07\") " Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.837004 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a73eba-dd14-4190-9208-23218ff6bc07-logs" (OuterVolumeSpecName: "logs") pod "e0a73eba-dd14-4190-9208-23218ff6bc07" (UID: "e0a73eba-dd14-4190-9208-23218ff6bc07"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.837144 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a73eba-dd14-4190-9208-23218ff6bc07-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.846250 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a73eba-dd14-4190-9208-23218ff6bc07-kube-api-access-mcpkw" (OuterVolumeSpecName: "kube-api-access-mcpkw") pod "e0a73eba-dd14-4190-9208-23218ff6bc07" (UID: "e0a73eba-dd14-4190-9208-23218ff6bc07"). InnerVolumeSpecName "kube-api-access-mcpkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.872160 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-config-data" (OuterVolumeSpecName: "config-data") pod "e0a73eba-dd14-4190-9208-23218ff6bc07" (UID: "e0a73eba-dd14-4190-9208-23218ff6bc07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.874539 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0a73eba-dd14-4190-9208-23218ff6bc07" (UID: "e0a73eba-dd14-4190-9208-23218ff6bc07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.898827 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e0a73eba-dd14-4190-9208-23218ff6bc07" (UID: "e0a73eba-dd14-4190-9208-23218ff6bc07"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.912926 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e0a73eba-dd14-4190-9208-23218ff6bc07" (UID: "e0a73eba-dd14-4190-9208-23218ff6bc07"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.939655 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcpkw\" (UniqueName: \"kubernetes.io/projected/e0a73eba-dd14-4190-9208-23218ff6bc07-kube-api-access-mcpkw\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.939708 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.939727 4922 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.939756 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:17 crc kubenswrapper[4922]: I0126 14:32:17.939776 4922 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a73eba-dd14-4190-9208-23218ff6bc07-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.103015 4922 generic.go:334] "Generic (PLEG): container finished" podID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerID="4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5" exitCode=0 Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.103214 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.103333 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-log" containerID="cri-o://ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5" gracePeriod=30 Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.103414 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-metadata" containerID="cri-o://11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51" gracePeriod=30 Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.103846 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0a73eba-dd14-4190-9208-23218ff6bc07","Type":"ContainerDied","Data":"4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5"} Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.103963 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0a73eba-dd14-4190-9208-23218ff6bc07","Type":"ContainerDied","Data":"3146760b5119bde5ec3688106a80be0698cbe6bb0381a0bb242a2559a8e34602"} Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.104000 4922 scope.go:117] "RemoveContainer" containerID="4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.164887 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.181219 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.192421 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 14:32:18 crc kubenswrapper[4922]: E0126 14:32:18.192843 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-api" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.192859 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-api" Jan 26 14:32:18 crc kubenswrapper[4922]: E0126 14:32:18.192883 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" containerName="dnsmasq-dns" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.192893 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" containerName="dnsmasq-dns" Jan 26 14:32:18 crc kubenswrapper[4922]: E0126 14:32:18.192925 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bcbc70c-9471-43ef-9411-bda440b81b54" containerName="nova-manage" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.192936 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bcbc70c-9471-43ef-9411-bda440b81b54" containerName="nova-manage" Jan 26 14:32:18 crc kubenswrapper[4922]: E0126 14:32:18.192971 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" containerName="init" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.192983 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" containerName="init" Jan 26 14:32:18 crc kubenswrapper[4922]: E0126 
14:32:18.193003 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-log" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.193015 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-log" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.193258 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb4090c-6ab3-4d9b-b406-9c5c1f85c63f" containerName="dnsmasq-dns" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.193281 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bcbc70c-9471-43ef-9411-bda440b81b54" containerName="nova-manage" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.193304 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-api" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.193322 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" containerName="nova-api-log" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.196689 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.210095 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.210700 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.212274 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.217183 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.250142 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec77602e-4cce-4d70-90ec-6d6adc5f6643-logs\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.250219 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.250261 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.250298 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.250320 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-config-data\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.250426 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmr7v\" (UniqueName: \"kubernetes.io/projected/ec77602e-4cce-4d70-90ec-6d6adc5f6643-kube-api-access-nmr7v\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.277815 4922 scope.go:117] "RemoveContainer" containerID="d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.301516 4922 scope.go:117] "RemoveContainer" containerID="4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5" Jan 26 14:32:18 crc kubenswrapper[4922]: E0126 14:32:18.304531 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5\": container with ID starting with 4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5 not found: ID does not exist" containerID="4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.304568 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5"} err="failed to get container status \"4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5\": rpc error: code = NotFound desc = could not find container \"4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5\": container with ID starting with 4bdb005a42a6e69ca929c8b0b40aa0b792eeef8b43757a407a4ce48f68e3dfa5 not found: ID does not exist" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.304613 4922 scope.go:117] "RemoveContainer" containerID="d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329" Jan 26 14:32:18 crc kubenswrapper[4922]: E0126 14:32:18.304907 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329\": container with ID starting with d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329 not found: ID does not exist" containerID="d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.304952 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329"} err="failed to get container status \"d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329\": rpc error: code = NotFound desc = could not find container \"d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329\": container with ID starting with d3326e3525afa08a6b23c1a5c419061e0c4a41fd4549a6cb15738c85b0317329 not found: ID does not exist" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.352394 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmr7v\" (UniqueName: 
\"kubernetes.io/projected/ec77602e-4cce-4d70-90ec-6d6adc5f6643-kube-api-access-nmr7v\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.352471 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec77602e-4cce-4d70-90ec-6d6adc5f6643-logs\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.352507 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.352535 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.352566 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.352588 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-config-data\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.353492 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec77602e-4cce-4d70-90ec-6d6adc5f6643-logs\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.356632 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.356638 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.357509 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-config-data\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.357558 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec77602e-4cce-4d70-90ec-6d6adc5f6643-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.374571 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmr7v\" (UniqueName: \"kubernetes.io/projected/ec77602e-4cce-4d70-90ec-6d6adc5f6643-kube-api-access-nmr7v\") pod \"nova-api-0\" (UID: \"ec77602e-4cce-4d70-90ec-6d6adc5f6643\") " pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.520352 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 14:32:18 crc kubenswrapper[4922]: I0126 14:32:18.990735 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.109148 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a73eba-dd14-4190-9208-23218ff6bc07" path="/var/lib/kubelet/pods/e0a73eba-dd14-4190-9208-23218ff6bc07/volumes" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.114317 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec77602e-4cce-4d70-90ec-6d6adc5f6643","Type":"ContainerStarted","Data":"e0b2f391c87520f91f73f747f634579f0d3ad36c18f77b508ac2567fdfe7988b"} Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.116290 4922 generic.go:334] "Generic (PLEG): container finished" podID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerID="ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5" exitCode=143 Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.116339 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82bfcb0b-ed58-4b8d-8352-8199ee74bae0","Type":"ContainerDied","Data":"ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5"} Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.698387 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.892756 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-logs\") pod \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.892848 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-config-data\") pod \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.892895 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85jwf\" (UniqueName: \"kubernetes.io/projected/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-kube-api-access-85jwf\") pod \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.893016 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-combined-ca-bundle\") pod \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.893185 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-nova-metadata-tls-certs\") pod \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\" (UID: \"82bfcb0b-ed58-4b8d-8352-8199ee74bae0\") " Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.893605 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-logs" (OuterVolumeSpecName: "logs") pod "82bfcb0b-ed58-4b8d-8352-8199ee74bae0" (UID: "82bfcb0b-ed58-4b8d-8352-8199ee74bae0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.893691 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.909272 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-kube-api-access-85jwf" (OuterVolumeSpecName: "kube-api-access-85jwf") pod "82bfcb0b-ed58-4b8d-8352-8199ee74bae0" (UID: "82bfcb0b-ed58-4b8d-8352-8199ee74bae0"). InnerVolumeSpecName "kube-api-access-85jwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.922565 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-config-data" (OuterVolumeSpecName: "config-data") pod "82bfcb0b-ed58-4b8d-8352-8199ee74bae0" (UID: "82bfcb0b-ed58-4b8d-8352-8199ee74bae0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.942928 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82bfcb0b-ed58-4b8d-8352-8199ee74bae0" (UID: "82bfcb0b-ed58-4b8d-8352-8199ee74bae0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.946586 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "82bfcb0b-ed58-4b8d-8352-8199ee74bae0" (UID: "82bfcb0b-ed58-4b8d-8352-8199ee74bae0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.996712 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.997972 4922 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.998144 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:19 crc kubenswrapper[4922]: I0126 14:32:19.998225 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85jwf\" (UniqueName: \"kubernetes.io/projected/82bfcb0b-ed58-4b8d-8352-8199ee74bae0-kube-api-access-85jwf\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.140184 4922 generic.go:334] "Generic (PLEG): container finished" podID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerID="11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51" exitCode=0 Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.140242 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82bfcb0b-ed58-4b8d-8352-8199ee74bae0","Type":"ContainerDied","Data":"11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51"} Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.140267 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82bfcb0b-ed58-4b8d-8352-8199ee74bae0","Type":"ContainerDied","Data":"9310f71fa3b27ad741a7ec634cd4eda359bca73b6de4be8d66877b78de4fe287"} Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.140284 4922 scope.go:117] "RemoveContainer" containerID="11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.140354 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.144017 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec77602e-4cce-4d70-90ec-6d6adc5f6643","Type":"ContainerStarted","Data":"78c11bdd3a2cb43df514842029c277efcc7816f75967d36f40d36e149929425c"} Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.144128 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec77602e-4cce-4d70-90ec-6d6adc5f6643","Type":"ContainerStarted","Data":"d73977407927f7b730e6b0de46d6e053db6839ff85385ba99b78f7a92ce44b8a"} Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.175611 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.175586614 podStartE2EDuration="2.175586614s" podCreationTimestamp="2026-01-26 14:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:32:20.168217168 +0000 UTC m=+1357.370479960" watchObservedRunningTime="2026-01-26 14:32:20.175586614 +0000 UTC m=+1357.377849396" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.186805 4922 scope.go:117] "RemoveContainer" containerID="ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.197724 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.209803 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.244352 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:32:20 crc kubenswrapper[4922]: E0126 14:32:20.244806 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-log" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.244823 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-log" Jan 26 14:32:20 crc kubenswrapper[4922]: E0126 14:32:20.244879 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-metadata" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.244887 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-metadata" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.245107 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-log" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.245134 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" containerName="nova-metadata-metadata" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.246333 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.246417 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.270097 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.270188 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.281297 4922 scope.go:117] "RemoveContainer" containerID="11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51" Jan 26 14:32:20 crc kubenswrapper[4922]: E0126 14:32:20.281745 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51\": container with ID starting with 11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51 not found: ID does not exist" containerID="11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.281780 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51"} err="failed to get container status \"11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51\": rpc error: code = NotFound desc = could not find container \"11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51\": container with ID starting with 11422c5289e35a76692e4598e5a34ed161a534892dd21743313e16b2dc307e51 not found: ID does not exist" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.281799 4922 scope.go:117] "RemoveContainer" containerID="ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5" Jan 26 14:32:20 crc kubenswrapper[4922]: E0126 14:32:20.282959 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5\": container with ID starting with ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5 not found: ID does not exist" containerID="ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.282997 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5"} err="failed to get container status \"ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5\": rpc error: code = NotFound desc = could not find container \"ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5\": container with ID starting with ed50857bc99b6498fb2b359c591f1df1773b6ff84551527e50829c0f10fbefc5 not found: ID does not exist" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.412378 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-config-data\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.412438 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sf7d\" (UniqueName: \"kubernetes.io/projected/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-kube-api-access-9sf7d\") pod 
\"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.412475 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-logs\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.412560 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.412638 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.515279 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.515413 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-config-data\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.515477 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sf7d\" (UniqueName: \"kubernetes.io/projected/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-kube-api-access-9sf7d\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.515522 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-logs\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.515651 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.518025 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-logs\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.524137 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.533828 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-config-data\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.534883 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.547013 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sf7d\" (UniqueName: \"kubernetes.io/projected/d01bf414-0bdd-49f2-aa15-54f8ddb04d7b-kube-api-access-9sf7d\") pod \"nova-metadata-0\" (UID: \"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b\") " pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.589707 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.900723 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fzrk6"] Jan 26 14:32:20 crc kubenswrapper[4922]: I0126 14:32:20.905832 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:20.919305 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzrk6"] Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.026406 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hjbh\" (UniqueName: \"kubernetes.io/projected/c07d78d3-3263-488b-b3a0-640f091363f2-kube-api-access-2hjbh\") pod \"redhat-operators-fzrk6\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") " pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.026457 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-catalog-content\") pod \"redhat-operators-fzrk6\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") " pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.026546 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-utilities\") pod \"redhat-operators-fzrk6\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") " pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.045520 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.102301 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="82bfcb0b-ed58-4b8d-8352-8199ee74bae0" path="/var/lib/kubelet/pods/82bfcb0b-ed58-4b8d-8352-8199ee74bae0/volumes" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.127720 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-utilities\") pod \"redhat-operators-fzrk6\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") " pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.127850 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hjbh\" (UniqueName: \"kubernetes.io/projected/c07d78d3-3263-488b-b3a0-640f091363f2-kube-api-access-2hjbh\") pod \"redhat-operators-fzrk6\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") " pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.127879 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-catalog-content\") pod \"redhat-operators-fzrk6\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") " pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.128349 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-catalog-content\") pod \"redhat-operators-fzrk6\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") " pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.128563 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-utilities\") pod \"redhat-operators-fzrk6\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") " pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.149292 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hjbh\" (UniqueName: \"kubernetes.io/projected/c07d78d3-3263-488b-b3a0-640f091363f2-kube-api-access-2hjbh\") pod \"redhat-operators-fzrk6\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") " pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.157200 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b","Type":"ContainerStarted","Data":"42a65d0e6568dc917a49938c57de1ce736554a4b227e9f55947f21d7caf4c6e7"} Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.162539 4922 generic.go:334] "Generic (PLEG): container finished" podID="cfad7c05-9d6a-4435-b019-731d7e1c5ec3" containerID="aa21a52dd279c7ffddd6539a48452941eaf397e8108327321383a9116c469e8f" exitCode=0 Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.162620 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfad7c05-9d6a-4435-b019-731d7e1c5ec3","Type":"ContainerDied","Data":"aa21a52dd279c7ffddd6539a48452941eaf397e8108327321383a9116c469e8f"} Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.162641 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"cfad7c05-9d6a-4435-b019-731d7e1c5ec3","Type":"ContainerDied","Data":"bcd0c133f3196f8188be575d30c0798e479734d2e25cee70abbdb0ca0248a710"} Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.162652 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcd0c133f3196f8188be575d30c0798e479734d2e25cee70abbdb0ca0248a710" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.232495 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.243026 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.433537 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-config-data\") pod \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.433955 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m92h\" (UniqueName: \"kubernetes.io/projected/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-kube-api-access-5m92h\") pod \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.434090 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-combined-ca-bundle\") pod \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\" (UID: \"cfad7c05-9d6a-4435-b019-731d7e1c5ec3\") " Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.438276 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-kube-api-access-5m92h" (OuterVolumeSpecName: "kube-api-access-5m92h") pod "cfad7c05-9d6a-4435-b019-731d7e1c5ec3" (UID: "cfad7c05-9d6a-4435-b019-731d7e1c5ec3"). InnerVolumeSpecName "kube-api-access-5m92h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.462457 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-config-data" (OuterVolumeSpecName: "config-data") pod "cfad7c05-9d6a-4435-b019-731d7e1c5ec3" (UID: "cfad7c05-9d6a-4435-b019-731d7e1c5ec3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.481908 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfad7c05-9d6a-4435-b019-731d7e1c5ec3" (UID: "cfad7c05-9d6a-4435-b019-731d7e1c5ec3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.536699 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m92h\" (UniqueName: \"kubernetes.io/projected/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-kube-api-access-5m92h\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.536732 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:21 crc kubenswrapper[4922]: I0126 14:32:21.536746 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfad7c05-9d6a-4435-b019-731d7e1c5ec3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.219391 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.220815 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b","Type":"ContainerStarted","Data":"ade867ff96f119813e14ea91f8a753ca129905a5c7d68ded976ea88de5848475"} Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.220864 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d01bf414-0bdd-49f2-aa15-54f8ddb04d7b","Type":"ContainerStarted","Data":"d5bd580b583f7bd539b10cadd9737ffe01a9a3576c038ae9d5319b484766040e"} Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.228507 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzrk6"] Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.274654 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.274615401 podStartE2EDuration="2.274615401s" podCreationTimestamp="2026-01-26 14:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:32:22.248600454 +0000 UTC m=+1359.450863226" watchObservedRunningTime="2026-01-26 14:32:22.274615401 +0000 UTC m=+1359.476878173" Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.291993 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.306449 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.318025 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 14:32:22 crc kubenswrapper[4922]: E0126 14:32:22.318426 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfad7c05-9d6a-4435-b019-731d7e1c5ec3" containerName="nova-scheduler-scheduler" Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.318445 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfad7c05-9d6a-4435-b019-731d7e1c5ec3" containerName="nova-scheduler-scheduler" Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.318648 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfad7c05-9d6a-4435-b019-731d7e1c5ec3" containerName="nova-scheduler-scheduler" Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.319318 4922 util.go:30] "No sandbox for 
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.323435 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.332135 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.463270 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77\") " pod="openstack/nova-scheduler-0"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.463567 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsr9x\" (UniqueName: \"kubernetes.io/projected/ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77-kube-api-access-fsr9x\") pod \"nova-scheduler-0\" (UID: \"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77\") " pod="openstack/nova-scheduler-0"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.463687 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77-config-data\") pod \"nova-scheduler-0\" (UID: \"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77\") " pod="openstack/nova-scheduler-0"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.611697 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77-config-data\") pod \"nova-scheduler-0\" (UID: \"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77\") " pod="openstack/nova-scheduler-0"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.611951 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77\") " pod="openstack/nova-scheduler-0"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.612101 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsr9x\" (UniqueName: \"kubernetes.io/projected/ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77-kube-api-access-fsr9x\") pod \"nova-scheduler-0\" (UID: \"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77\") " pod="openstack/nova-scheduler-0"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.620133 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77\") " pod="openstack/nova-scheduler-0"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.620594 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77-config-data\") pod \"nova-scheduler-0\" (UID: \"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77\") " pod="openstack/nova-scheduler-0"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.641527 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsr9x\" (UniqueName: \"kubernetes.io/projected/ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77-kube-api-access-fsr9x\") pod \"nova-scheduler-0\" (UID: \"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77\") " pod="openstack/nova-scheduler-0"
Jan 26 14:32:22 crc kubenswrapper[4922]: I0126 14:32:22.707956 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 14:32:23 crc kubenswrapper[4922]: I0126 14:32:23.110398 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfad7c05-9d6a-4435-b019-731d7e1c5ec3" path="/var/lib/kubelet/pods/cfad7c05-9d6a-4435-b019-731d7e1c5ec3/volumes"
Jan 26 14:32:23 crc kubenswrapper[4922]: I0126 14:32:23.143254 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 14:32:23 crc kubenswrapper[4922]: W0126 14:32:23.143933 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef2cf2cb_aa43_4e4d_be50_8476dcdc6f77.slice/crio-6f529268cb9769ad0d845a6417986e1302f223ea3e0407d7f0d80d45555a165f WatchSource:0}: Error finding container 6f529268cb9769ad0d845a6417986e1302f223ea3e0407d7f0d80d45555a165f: Status 404 returned error can't find the container with id 6f529268cb9769ad0d845a6417986e1302f223ea3e0407d7f0d80d45555a165f
Jan 26 14:32:23 crc kubenswrapper[4922]: I0126 14:32:23.237597 4922 generic.go:334] "Generic (PLEG): container finished" podID="c07d78d3-3263-488b-b3a0-640f091363f2" containerID="5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe" exitCode=0
Jan 26 14:32:23 crc kubenswrapper[4922]: I0126 14:32:23.237709 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzrk6" event={"ID":"c07d78d3-3263-488b-b3a0-640f091363f2","Type":"ContainerDied","Data":"5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe"}
Jan 26 14:32:23 crc kubenswrapper[4922]: I0126 14:32:23.238138 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzrk6" event={"ID":"c07d78d3-3263-488b-b3a0-640f091363f2","Type":"ContainerStarted","Data":"27c6e49c6394926e2d0c997afb219cf8cc3d80d452d4f661b2d78f25a60dea66"}
Jan 26 14:32:23 crc kubenswrapper[4922]: I0126 14:32:23.240514 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77","Type":"ContainerStarted","Data":"6f529268cb9769ad0d845a6417986e1302f223ea3e0407d7f0d80d45555a165f"}
Jan 26 14:32:24 crc kubenswrapper[4922]: I0126 14:32:24.252624 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77","Type":"ContainerStarted","Data":"e5783deedeabdb082f6feea353a4da55d6fa0d7f6bffc6020050e431cae7de43"}
Jan 26 14:32:24 crc kubenswrapper[4922]: I0126 14:32:24.255274 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzrk6" event={"ID":"c07d78d3-3263-488b-b3a0-640f091363f2","Type":"ContainerStarted","Data":"ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92"}
Jan 26 14:32:24 crc kubenswrapper[4922]: I0126 14:32:24.273895 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.273875637 podStartE2EDuration="2.273875637s" podCreationTimestamp="2026-01-26 14:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:32:24.272161391 +0000 UTC m=+1361.474424183" watchObservedRunningTime="2026-01-26 14:32:24.273875637 +0000 UTC m=+1361.476138419"
Jan 26 14:32:25 crc kubenswrapper[4922]: I0126 14:32:25.275208 4922 generic.go:334] "Generic (PLEG): container finished" podID="c07d78d3-3263-488b-b3a0-640f091363f2" containerID="ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92" exitCode=0
Jan 26 14:32:25 crc kubenswrapper[4922]: I0126 14:32:25.277346 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzrk6" event={"ID":"c07d78d3-3263-488b-b3a0-640f091363f2","Type":"ContainerDied","Data":"ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92"}
Jan 26 14:32:25 crc kubenswrapper[4922]: I0126 14:32:25.590181 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 26 14:32:25 crc kubenswrapper[4922]: I0126 14:32:25.590266 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 26 14:32:27 crc kubenswrapper[4922]: I0126 14:32:27.305493 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzrk6" event={"ID":"c07d78d3-3263-488b-b3a0-640f091363f2","Type":"ContainerStarted","Data":"4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a"}
Jan 26 14:32:27 crc kubenswrapper[4922]: I0126 14:32:27.336897 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fzrk6" podStartSLOduration=4.120264372 podStartE2EDuration="7.33688205s" podCreationTimestamp="2026-01-26 14:32:20 +0000 UTC" firstStartedPulling="2026-01-26 14:32:23.239829011 +0000 UTC m=+1360.442091823" lastFinishedPulling="2026-01-26 14:32:26.456446679 +0000 UTC m=+1363.658709501" observedRunningTime="2026-01-26 14:32:27.333752984 +0000 UTC m=+1364.536015786" watchObservedRunningTime="2026-01-26 14:32:27.33688205 +0000 UTC m=+1364.539144822"
Jan 26 14:32:27 crc kubenswrapper[4922]: I0126 14:32:27.708234 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 26 14:32:28 crc kubenswrapper[4922]: I0126 14:32:28.520852 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 14:32:28 crc kubenswrapper[4922]: I0126 14:32:28.520906 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 14:32:29 crc kubenswrapper[4922]: I0126 14:32:29.539274 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec77602e-4cce-4d70-90ec-6d6adc5f6643" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 14:32:29 crc kubenswrapper[4922]: I0126 14:32:29.539337 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec77602e-4cce-4d70-90ec-6d6adc5f6643" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 14:32:30 crc kubenswrapper[4922]: I0126 14:32:30.590791 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 26 14:32:30 crc kubenswrapper[4922]: I0126 14:32:30.592204 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
pod="openstack/nova-metadata-0" Jan 26 14:32:31 crc kubenswrapper[4922]: I0126 14:32:31.234111 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:31 crc kubenswrapper[4922]: I0126 14:32:31.234166 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:31 crc kubenswrapper[4922]: I0126 14:32:31.605266 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d01bf414-0bdd-49f2-aa15-54f8ddb04d7b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 14:32:31 crc kubenswrapper[4922]: I0126 14:32:31.605307 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d01bf414-0bdd-49f2-aa15-54f8ddb04d7b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 14:32:32 crc kubenswrapper[4922]: I0126 14:32:32.286880 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzrk6" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" containerName="registry-server" probeResult="failure" output=< Jan 26 14:32:32 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Jan 26 14:32:32 crc kubenswrapper[4922]: > Jan 26 14:32:32 crc kubenswrapper[4922]: I0126 14:32:32.325734 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 14:32:32 crc kubenswrapper[4922]: I0126 14:32:32.708433 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 14:32:32 crc kubenswrapper[4922]: I0126 14:32:32.773486 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 14:32:33 crc kubenswrapper[4922]: I0126 14:32:33.408746 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 14:32:38 crc kubenswrapper[4922]: I0126 14:32:38.533969 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 14:32:38 crc kubenswrapper[4922]: I0126 14:32:38.535109 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 14:32:38 crc kubenswrapper[4922]: I0126 14:32:38.537883 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 14:32:38 crc kubenswrapper[4922]: I0126 14:32:38.551651 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 14:32:39 crc kubenswrapper[4922]: I0126 14:32:39.433804 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 14:32:39 crc kubenswrapper[4922]: I0126 14:32:39.442978 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 14:32:40 crc kubenswrapper[4922]: I0126 14:32:40.595556 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 14:32:40 crc kubenswrapper[4922]: I0126 14:32:40.597172 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Jan 26 14:32:40 crc kubenswrapper[4922]: I0126 14:32:40.603457 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.307918 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.308395 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.308459 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.309342 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fe8483d01fe17dae14bd575d394e895ec02b281bd5bc48e80a4af9b52b57371"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.309419 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://0fe8483d01fe17dae14bd575d394e895ec02b281bd5bc48e80a4af9b52b57371" gracePeriod=600 Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.329468 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.418322 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fzrk6" Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.478748 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="0fe8483d01fe17dae14bd575d394e895ec02b281bd5bc48e80a4af9b52b57371" exitCode=0 Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.483177 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"0fe8483d01fe17dae14bd575d394e895ec02b281bd5bc48e80a4af9b52b57371"} Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.483266 4922 scope.go:117] "RemoveContainer" containerID="579737a5aa8bb32a4f554c6e647711e28b3e50a7ec3de0bd2d82dee5d94940f2" Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.495601 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 14:32:41 crc kubenswrapper[4922]: I0126 14:32:41.576115 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzrk6"] Jan 26 14:32:42 crc kubenswrapper[4922]: I0126 14:32:42.489563 4922 
Jan 26 14:32:42 crc kubenswrapper[4922]: I0126 14:32:42.490800 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70"}
Jan 26 14:32:42 crc kubenswrapper[4922]: I0126 14:32:42.981190 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzrk6"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.162403 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hjbh\" (UniqueName: \"kubernetes.io/projected/c07d78d3-3263-488b-b3a0-640f091363f2-kube-api-access-2hjbh\") pod \"c07d78d3-3263-488b-b3a0-640f091363f2\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") "
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.162482 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-catalog-content\") pod \"c07d78d3-3263-488b-b3a0-640f091363f2\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") "
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.162534 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-utilities\") pod \"c07d78d3-3263-488b-b3a0-640f091363f2\" (UID: \"c07d78d3-3263-488b-b3a0-640f091363f2\") "
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.163571 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-utilities" (OuterVolumeSpecName: "utilities") pod "c07d78d3-3263-488b-b3a0-640f091363f2" (UID: "c07d78d3-3263-488b-b3a0-640f091363f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.169155 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07d78d3-3263-488b-b3a0-640f091363f2-kube-api-access-2hjbh" (OuterVolumeSpecName: "kube-api-access-2hjbh") pod "c07d78d3-3263-488b-b3a0-640f091363f2" (UID: "c07d78d3-3263-488b-b3a0-640f091363f2"). InnerVolumeSpecName "kube-api-access-2hjbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.265889 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hjbh\" (UniqueName: \"kubernetes.io/projected/c07d78d3-3263-488b-b3a0-640f091363f2-kube-api-access-2hjbh\") on node \"crc\" DevicePath \"\""
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.266308 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.292015 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c07d78d3-3263-488b-b3a0-640f091363f2" (UID: "c07d78d3-3263-488b-b3a0-640f091363f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.368782 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07d78d3-3263-488b-b3a0-640f091363f2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.502614 4922 generic.go:334] "Generic (PLEG): container finished" podID="c07d78d3-3263-488b-b3a0-640f091363f2" containerID="4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a" exitCode=0
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.502706 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzrk6"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.502720 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzrk6" event={"ID":"c07d78d3-3263-488b-b3a0-640f091363f2","Type":"ContainerDied","Data":"4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a"}
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.503597 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzrk6" event={"ID":"c07d78d3-3263-488b-b3a0-640f091363f2","Type":"ContainerDied","Data":"27c6e49c6394926e2d0c997afb219cf8cc3d80d452d4f661b2d78f25a60dea66"}
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.503630 4922 scope.go:117] "RemoveContainer" containerID="4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.541131 4922 scope.go:117] "RemoveContainer" containerID="ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.549342 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzrk6"]
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.572571 4922 scope.go:117] "RemoveContainer" containerID="5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.572633 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fzrk6"]
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.624390 4922 scope.go:117] "RemoveContainer" containerID="4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a"
Jan 26 14:32:43 crc kubenswrapper[4922]: E0126 14:32:43.625113 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a\": container with ID starting with 4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a not found: ID does not exist" containerID="4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.625177 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a"} err="failed to get container status \"4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a\": rpc error: code = NotFound desc = could not find container \"4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a\": container with ID starting with 4d28b0835521ddc2ac0f5a687b1e089cc4f06b3abd2bcf26d7ce02ef9f26262a not found: ID does not exist"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.625212 4922 scope.go:117] "RemoveContainer" containerID="ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92"
Jan 26 14:32:43 crc kubenswrapper[4922]: E0126 14:32:43.625928 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92\": container with ID starting with ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92 not found: ID does not exist" containerID="ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.625983 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92"} err="failed to get container status \"ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92\": rpc error: code = NotFound desc = could not find container \"ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92\": container with ID starting with ac0513717ac698cfc30e43f8b74a3a409206b503799a08cf437caa926635da92 not found: ID does not exist"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.626016 4922 scope.go:117] "RemoveContainer" containerID="5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe"
Jan 26 14:32:43 crc kubenswrapper[4922]: E0126 14:32:43.626700 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe\": container with ID starting with 5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe not found: ID does not exist" containerID="5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe"
Jan 26 14:32:43 crc kubenswrapper[4922]: I0126 14:32:43.626737 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe"} err="failed to get container status \"5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe\": rpc error: code = NotFound desc = could not find container \"5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe\": container with ID starting with 5ab63e5423637596124312b9bd5f05e18e6d786b81c7b9860e3d325da5e0d8fe not found: ID does not exist"
Jan 26 14:32:45 crc kubenswrapper[4922]: I0126 14:32:45.112258 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" path="/var/lib/kubelet/pods/c07d78d3-3263-488b-b3a0-640f091363f2/volumes"
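The RemoveContainer / "ContainerStatus from runtime service failed" pairs above show cleanup racing with garbage collection: the container is already gone, the CRI runtime answers NotFound, and the kubelet logs the error but carries on. A sketch of that idempotent-deletion pattern; the wrapper and its remove callback are hypothetical, not kubelet source, only the NotFound handling mirrors the behavior in the log:

```go
package cleanup

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical wrapper around a CRI remove call.
// A codes.NotFound response ("ID does not exist") means the container is
// already gone, so the deletion is treated as done rather than failed,
// which keeps retried cleanups idempotent.
func removeContainer(remove func(id string) error, id string) error {
	if err := remove(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // container already removed by a concurrent path
		}
		return err
	}
	return nil
}
```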
volumes dir" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" path="/var/lib/kubelet/pods/c07d78d3-3263-488b-b3a0-640f091363f2/volumes" Jan 26 14:32:50 crc kubenswrapper[4922]: I0126 14:32:50.316493 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 14:32:51 crc kubenswrapper[4922]: I0126 14:32:51.423110 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 14:32:54 crc kubenswrapper[4922]: I0126 14:32:54.196995 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" containerName="rabbitmq" containerID="cri-o://59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c" gracePeriod=604797 Jan 26 14:32:54 crc kubenswrapper[4922]: I0126 14:32:54.541887 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e3ea763a-f09f-435f-b75d-69e3b9160943" containerName="rabbitmq" containerID="cri-o://e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1" gracePeriod=604797 Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.313244 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.460737 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-pod-info\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.460782 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-server-conf\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.460813 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.460838 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-plugins-conf\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.460854 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-confd\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.460870 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-config-data\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.460937 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-erlang-cookie\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.460965 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-tls\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.460985 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-erlang-cookie-secret\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.461030 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-plugins\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.461050 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbqjs\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-kube-api-access-wbqjs\") pod \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\" (UID: \"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.468041 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.470351 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.470799 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.472685 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.475198 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-pod-info" (OuterVolumeSpecName: "pod-info") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.479653 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.480946 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.481204 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-kube-api-access-wbqjs" (OuterVolumeSpecName: "kube-api-access-wbqjs") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "kube-api-access-wbqjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.501642 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-config-data" (OuterVolumeSpecName: "config-data") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.563495 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.563755 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbqjs\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-kube-api-access-wbqjs\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.563767 4922 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.563789 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.563799 4922 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.563809 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.563817 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.563825 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.563833 4922 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.575342 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-server-conf" (OuterVolumeSpecName: "server-conf") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.597201 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.658848 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.661088 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" (UID: "9b0a9cff-a23b-4c41-ac95-97e2b3532cc0"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.665804 4922 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.665845 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.665857 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.693983 4922 generic.go:334] "Generic (PLEG): container finished" podID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" containerID="59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c" exitCode=0 Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.694047 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0","Type":"ContainerDied","Data":"59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c"} Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.694091 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b0a9cff-a23b-4c41-ac95-97e2b3532cc0","Type":"ContainerDied","Data":"f6ac1786adcb3ed7e825324dc80cf67ab7cfc03c5f8f5ebacdf136d0bff8707e"} Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.694108 4922 scope.go:117] "RemoveContainer" containerID="59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.694236 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.726103 4922 generic.go:334] "Generic (PLEG): container finished" podID="e3ea763a-f09f-435f-b75d-69e3b9160943" containerID="e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1" exitCode=0 Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.726152 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e3ea763a-f09f-435f-b75d-69e3b9160943","Type":"ContainerDied","Data":"e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1"} Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.726184 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e3ea763a-f09f-435f-b75d-69e3b9160943","Type":"ContainerDied","Data":"9e48dcc7a919f738b7cc5f7e2e6d5dc6af1f476b92624cf7d3aa457499f98215"} Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.726262 4922 util.go:48] "No ready sandbox for pod can be found. 
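When following a dense teardown like the RabbitMQ sequence above, it helps to tally PLEG events per pod. A hypothetical helper that keys off the literal pod="ns/name" and "Type":"..." substrings in these entries; this is plain string matching for log triage, not real kubelet parsing:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	podRe  = regexp.MustCompile(`pod="([^"]+)"`)
	typeRe = regexp.MustCompile(`"Type":"(ContainerStarted|ContainerDied)"`)
)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		t := typeRe.FindStringSubmatch(line)
		p := podRe.FindStringSubmatch(line)
		if t != nil && p != nil {
			counts[p[1]+" "+t[1]]++ // e.g. "openstack/rabbitmq-server-0 ContainerDied"
		}
	}
	for k, n := range counts {
		fmt.Println(n, k)
	}
}
```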
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.730640 4922 scope.go:117] "RemoveContainer" containerID="2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767102 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3ea763a-f09f-435f-b75d-69e3b9160943-pod-info\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767166 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3ea763a-f09f-435f-b75d-69e3b9160943-erlang-cookie-secret\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767231 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-tls\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767256 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-plugins\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767291 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-erlang-cookie\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767400 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbw8h\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-kube-api-access-fbw8h\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767447 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-plugins-conf\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767484 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-server-conf\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767512 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-config-data\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767533 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.767581 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-confd\") pod \"e3ea763a-f09f-435f-b75d-69e3b9160943\" (UID: \"e3ea763a-f09f-435f-b75d-69e3b9160943\") " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.777408 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e3ea763a-f09f-435f-b75d-69e3b9160943-pod-info" (OuterVolumeSpecName: "pod-info") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.784275 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.784652 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.785249 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.791988 4922 scope.go:117] "RemoveContainer" containerID="59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c" Jan 26 14:32:56 crc kubenswrapper[4922]: E0126 14:32:56.795462 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c\": container with ID starting with 59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c not found: ID does not exist" containerID="59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.795507 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c"} err="failed to get container status \"59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c\": rpc error: code = NotFound desc = could not find container \"59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c\": container with ID starting with 59136e58a2b7a9985a8a42c6256ca630c5e44c7347c22b21d4cd709e4671cc7c not found: ID does not exist" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.795551 4922 scope.go:117] "RemoveContainer" containerID="2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754" Jan 26 14:32:56 crc kubenswrapper[4922]: E0126 14:32:56.798572 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754\": container with ID starting with 2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754 not found: ID does not exist" containerID="2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.798605 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754"} err="failed to get container status \"2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754\": rpc error: code = NotFound desc = could not find container \"2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754\": container with ID starting with 2541565838ec55d0cd2cbb38a72b3a34fbbf0087454cba56dcdd2dd1d09c4754 not found: ID does not exist" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.798623 4922 scope.go:117] "RemoveContainer" containerID="e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.821530 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ea763a-f09f-435f-b75d-69e3b9160943-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.824289 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.828465 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-kube-api-access-fbw8h" (OuterVolumeSpecName: "kube-api-access-fbw8h") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "kube-api-access-fbw8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.840335 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.843591 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.878912 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbw8h\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-kube-api-access-fbw8h\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.878966 4922 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.879009 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.879021 4922 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3ea763a-f09f-435f-b75d-69e3b9160943-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.879033 4922 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3ea763a-f09f-435f-b75d-69e3b9160943-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.879044 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.879054 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.879084 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.898227 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.931488 4922 scope.go:117] "RemoveContainer" 
containerID="e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.957369 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-server-conf" (OuterVolumeSpecName: "server-conf") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.980900 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-config-data" (OuterVolumeSpecName: "config-data") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:32:56 crc kubenswrapper[4922]: I0126 14:32:56.987229 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e3ea763a-f09f-435f-b75d-69e3b9160943" (UID: "e3ea763a-f09f-435f-b75d-69e3b9160943"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.015830 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 14:32:57 crc kubenswrapper[4922]: E0126 14:32:57.016247 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" containerName="rabbitmq" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016258 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" containerName="rabbitmq" Jan 26 14:32:57 crc kubenswrapper[4922]: E0126 14:32:57.016275 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" containerName="extract-utilities" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016281 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" containerName="extract-utilities" Jan 26 14:32:57 crc kubenswrapper[4922]: E0126 14:32:57.016291 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" containerName="setup-container" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016297 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" containerName="setup-container" Jan 26 14:32:57 crc kubenswrapper[4922]: E0126 14:32:57.016304 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" containerName="extract-content" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016311 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" containerName="extract-content" Jan 26 14:32:57 crc kubenswrapper[4922]: E0126 14:32:57.016323 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ea763a-f09f-435f-b75d-69e3b9160943" containerName="setup-container" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016329 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ea763a-f09f-435f-b75d-69e3b9160943" containerName="setup-container" Jan 26 14:32:57 crc kubenswrapper[4922]: 
E0126 14:32:57.016338 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" containerName="registry-server" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016344 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" containerName="registry-server" Jan 26 14:32:57 crc kubenswrapper[4922]: E0126 14:32:57.016360 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ea763a-f09f-435f-b75d-69e3b9160943" containerName="rabbitmq" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016365 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ea763a-f09f-435f-b75d-69e3b9160943" containerName="rabbitmq" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016528 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" containerName="rabbitmq" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016543 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c07d78d3-3263-488b-b3a0-640f091363f2" containerName="registry-server" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.016563 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ea763a-f09f-435f-b75d-69e3b9160943" containerName="rabbitmq" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.017588 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.027668 4922 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.027699 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3ea763a-f09f-435f-b75d-69e3b9160943-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.027709 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3ea763a-f09f-435f-b75d-69e3b9160943-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.032809 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.032978 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.033105 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qfw56" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.033183 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.033256 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.033261 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.033593 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.049391 4922 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.051528 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.097505 4922 scope.go:117] "RemoveContainer" containerID="e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1" Jan 26 14:32:57 crc kubenswrapper[4922]: E0126 14:32:57.098162 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1\": container with ID starting with e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1 not found: ID does not exist" containerID="e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.098201 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1"} err="failed to get container status \"e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1\": rpc error: code = NotFound desc = could not find container \"e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1\": container with ID starting with e652f33c2bf6eb1979dea6d44eee43baed379b93935a78a7d0c1cd69db8d19d1 not found: ID does not exist" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.098228 4922 scope.go:117] "RemoveContainer" containerID="e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f" Jan 26 14:32:57 crc kubenswrapper[4922]: E0126 14:32:57.099406 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f\": container with ID starting with e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f not found: ID does not exist" containerID="e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.099433 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f"} err="failed to get container status \"e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f\": rpc error: code = NotFound desc = could not find container \"e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f\": container with ID starting with e2b2ab8434c173ee5479044b65864644ee07bf255b1e7fc19fa49e23d9c8322f not found: ID does not exist" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.104740 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0a9cff-a23b-4c41-ac95-97e2b3532cc0" path="/var/lib/kubelet/pods/9b0a9cff-a23b-4c41-ac95-97e2b3532cc0/volumes" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129541 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129627 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129687 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129720 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129749 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129796 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129816 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129854 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129885 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129947 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.129972 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jbxq\" 
(UniqueName: \"kubernetes.io/projected/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-kube-api-access-8jbxq\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.130085 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.145466 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.156106 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.179077 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.182417 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.185541 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.185743 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jszrh" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.187503 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.187639 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.187701 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.187506 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.187842 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.198179 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231623 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231665 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231714 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " 
pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231745 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231770 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231814 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231830 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231873 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231910 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231945 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.231966 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jbxq\" (UniqueName: \"kubernetes.io/projected/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-kube-api-access-8jbxq\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.232100 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.232168 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.232257 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.233792 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.236240 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.237008 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.238380 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.242662 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.246626 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.247355 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.250743 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jbxq\" (UniqueName: \"kubernetes.io/projected/e3a4bf42-9b24-473a-bca6-f81f1d0884fb-kube-api-access-8jbxq\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.279183 4922 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"e3a4bf42-9b24-473a-bca6-f81f1d0884fb\") " pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334100 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334168 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334242 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334395 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334577 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334616 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdhnn\" (UniqueName: \"kubernetes.io/projected/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-kube-api-access-mdhnn\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334695 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334810 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334847 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.334961 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.335004 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437076 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437122 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437143 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437164 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437193 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437226 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437281 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437305 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdhnn\" (UniqueName: \"kubernetes.io/projected/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-kube-api-access-mdhnn\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437351 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437407 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.437452 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.438613 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.438927 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.438917 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.439153 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.439240 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc 
kubenswrapper[4922]: I0126 14:32:57.439613 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.443383 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.443607 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.445591 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.448044 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.451271 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.465827 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdhnn\" (UniqueName: \"kubernetes.io/projected/34b7c66d-87b0-4db4-aa8c-7dd19293e8fd-kube-api-access-mdhnn\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.493884 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.506837 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:32:57 crc kubenswrapper[4922]: I0126 14:32:57.917052 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 14:32:57 crc kubenswrapper[4922]: W0126 14:32:57.918679 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3a4bf42_9b24_473a_bca6_f81f1d0884fb.slice/crio-aed1578f1ea08507b20f78dec57371117466b7de74ac5515d28a6e92468617db WatchSource:0}: Error finding container aed1578f1ea08507b20f78dec57371117466b7de74ac5515d28a6e92468617db: Status 404 returned error can't find the container with id aed1578f1ea08507b20f78dec57371117466b7de74ac5515d28a6e92468617db Jan 26 14:32:58 crc kubenswrapper[4922]: I0126 14:32:58.037893 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 14:32:58 crc kubenswrapper[4922]: W0126 14:32:58.039573 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34b7c66d_87b0_4db4_aa8c_7dd19293e8fd.slice/crio-8eb04fe21a17b9ea273f0a657290d0ad3b3d46a5ed7cb35c7293891cd42b5cca WatchSource:0}: Error finding container 8eb04fe21a17b9ea273f0a657290d0ad3b3d46a5ed7cb35c7293891cd42b5cca: Status 404 returned error can't find the container with id 8eb04fe21a17b9ea273f0a657290d0ad3b3d46a5ed7cb35c7293891cd42b5cca Jan 26 14:32:58 crc kubenswrapper[4922]: I0126 14:32:58.755994 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e3a4bf42-9b24-473a-bca6-f81f1d0884fb","Type":"ContainerStarted","Data":"aed1578f1ea08507b20f78dec57371117466b7de74ac5515d28a6e92468617db"} Jan 26 14:32:58 crc kubenswrapper[4922]: I0126 14:32:58.757001 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd","Type":"ContainerStarted","Data":"8eb04fe21a17b9ea273f0a657290d0ad3b3d46a5ed7cb35c7293891cd42b5cca"} Jan 26 14:32:59 crc kubenswrapper[4922]: I0126 14:32:59.116455 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ea763a-f09f-435f-b75d-69e3b9160943" path="/var/lib/kubelet/pods/e3ea763a-f09f-435f-b75d-69e3b9160943/volumes" Jan 26 14:33:00 crc kubenswrapper[4922]: I0126 14:33:00.785136 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd","Type":"ContainerStarted","Data":"7b7d12433c92f2aadd5372c9f75ec0a1993b68814d26d53e292f5bcbbccb71d5"} Jan 26 14:33:00 crc kubenswrapper[4922]: I0126 14:33:00.789464 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e3a4bf42-9b24-473a-bca6-f81f1d0884fb","Type":"ContainerStarted","Data":"23ef162c4cbf97da456f5ed0cd039b59623b1da61de00b7b092ff9f420639448"} Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.027034 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57fc75595-swsxm"] Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.029491 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.033751 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.044036 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fc75595-swsxm"] Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.095337 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-sb\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.095447 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-openstack-edpm-ipam\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.095542 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-svc\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.095624 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-swift-storage-0\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.095703 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw28p\" (UniqueName: \"kubernetes.io/projected/a68cd785-7533-4d0d-83de-bd8285f883e3-kube-api-access-fw28p\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.095842 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-nb\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.095990 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-config\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.197510 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw28p\" (UniqueName: \"kubernetes.io/projected/a68cd785-7533-4d0d-83de-bd8285f883e3-kube-api-access-fw28p\") pod \"dnsmasq-dns-57fc75595-swsxm\" 
(UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.197643 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-nb\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.197765 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-config\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.197838 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-sb\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.197882 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-openstack-edpm-ipam\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.197962 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-svc\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.198162 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-swift-storage-0\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.198679 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-config\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.198679 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-nb\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.198878 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-sb\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc 
kubenswrapper[4922]: I0126 14:33:09.198996 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-openstack-edpm-ipam\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.198995 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-svc\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.199865 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-swift-storage-0\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.215307 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw28p\" (UniqueName: \"kubernetes.io/projected/a68cd785-7533-4d0d-83de-bd8285f883e3-kube-api-access-fw28p\") pod \"dnsmasq-dns-57fc75595-swsxm\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.375523 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.868649 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fc75595-swsxm"] Jan 26 14:33:09 crc kubenswrapper[4922]: I0126 14:33:09.883691 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fc75595-swsxm" event={"ID":"a68cd785-7533-4d0d-83de-bd8285f883e3","Type":"ContainerStarted","Data":"111192c38b3e8b0ba53fe9f371bf071925532263ec73c84c99dd117f06949f7d"} Jan 26 14:33:10 crc kubenswrapper[4922]: I0126 14:33:10.894428 4922 generic.go:334] "Generic (PLEG): container finished" podID="a68cd785-7533-4d0d-83de-bd8285f883e3" containerID="a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c" exitCode=0 Jan 26 14:33:10 crc kubenswrapper[4922]: I0126 14:33:10.894471 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fc75595-swsxm" event={"ID":"a68cd785-7533-4d0d-83de-bd8285f883e3","Type":"ContainerDied","Data":"a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c"} Jan 26 14:33:11 crc kubenswrapper[4922]: I0126 14:33:11.913385 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fc75595-swsxm" event={"ID":"a68cd785-7533-4d0d-83de-bd8285f883e3","Type":"ContainerStarted","Data":"3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698"} Jan 26 14:33:12 crc kubenswrapper[4922]: I0126 14:33:12.921089 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:12 crc kubenswrapper[4922]: I0126 14:33:12.951113 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57fc75595-swsxm" podStartSLOduration=3.951089847 podStartE2EDuration="3.951089847s" podCreationTimestamp="2026-01-26 14:33:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:33:12.943581904 +0000 UTC m=+1410.145844676" watchObservedRunningTime="2026-01-26 14:33:12.951089847 +0000 UTC m=+1410.153352629" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.377341 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.483728 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55556d9745-knxv2"] Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.484256 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55556d9745-knxv2" podUID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" containerName="dnsmasq-dns" containerID="cri-o://216636d782b5a086696b0ca146e89125c4544c9143f66567db02cc32f0b7950e" gracePeriod=10 Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.669330 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78d579bbd7-jvjv8"] Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.671161 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.684775 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d579bbd7-jvjv8"] Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.826656 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn99m\" (UniqueName: \"kubernetes.io/projected/deee4578-fc2b-4162-b39e-012e9b6b2e8a-kube-api-access-zn99m\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.826740 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-openstack-edpm-ipam\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.826910 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-ovsdbserver-nb\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.827024 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-ovsdbserver-sb\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.827090 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-dns-swift-storage-0\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " 
pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.827130 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-dns-svc\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.827151 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-config\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.929058 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-ovsdbserver-nb\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.929147 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-ovsdbserver-sb\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.929181 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-dns-swift-storage-0\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.929205 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-dns-svc\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.929223 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-config\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.929266 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn99m\" (UniqueName: \"kubernetes.io/projected/deee4578-fc2b-4162-b39e-012e9b6b2e8a-kube-api-access-zn99m\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.930498 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-dns-swift-storage-0\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 
26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.930614 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-openstack-edpm-ipam\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.930632 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-ovsdbserver-sb\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.930652 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-ovsdbserver-nb\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.931227 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-config\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.931805 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-dns-svc\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.931822 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/deee4578-fc2b-4162-b39e-012e9b6b2e8a-openstack-edpm-ipam\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:19 crc kubenswrapper[4922]: I0126 14:33:19.951377 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn99m\" (UniqueName: \"kubernetes.io/projected/deee4578-fc2b-4162-b39e-012e9b6b2e8a-kube-api-access-zn99m\") pod \"dnsmasq-dns-78d579bbd7-jvjv8\" (UID: \"deee4578-fc2b-4162-b39e-012e9b6b2e8a\") " pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.000009 4922 generic.go:334] "Generic (PLEG): container finished" podID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" containerID="216636d782b5a086696b0ca146e89125c4544c9143f66567db02cc32f0b7950e" exitCode=0 Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.000056 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55556d9745-knxv2" event={"ID":"0a47fca1-5b0a-41c4-a75a-15e005e0e385","Type":"ContainerDied","Data":"216636d782b5a086696b0ca146e89125c4544c9143f66567db02cc32f0b7950e"} Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.000125 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55556d9745-knxv2" event={"ID":"0a47fca1-5b0a-41c4-a75a-15e005e0e385","Type":"ContainerDied","Data":"c81cd23c6c7721850e49e426f4cae55b0f4d37539e6fc3a29ef098ed0352f73f"} 
Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.000139 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81cd23c6c7721850e49e426f4cae55b0f4d37539e6fc3a29ef098ed0352f73f" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.003134 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.106433 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55556d9745-knxv2" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.237764 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0a47fca1-5b0a-41c4-a75a-15e005e0e385-kube-api-access-mpwmw\") pod \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.237827 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-config\") pod \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.237866 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-swift-storage-0\") pod \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.237987 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-svc\") pod \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.238041 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-sb\") pod \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.238103 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-nb\") pod \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\" (UID: \"0a47fca1-5b0a-41c4-a75a-15e005e0e385\") " Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.243695 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a47fca1-5b0a-41c4-a75a-15e005e0e385-kube-api-access-mpwmw" (OuterVolumeSpecName: "kube-api-access-mpwmw") pod "0a47fca1-5b0a-41c4-a75a-15e005e0e385" (UID: "0a47fca1-5b0a-41c4-a75a-15e005e0e385"). InnerVolumeSpecName "kube-api-access-mpwmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.287144 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-config" (OuterVolumeSpecName: "config") pod "0a47fca1-5b0a-41c4-a75a-15e005e0e385" (UID: "0a47fca1-5b0a-41c4-a75a-15e005e0e385"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.311964 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a47fca1-5b0a-41c4-a75a-15e005e0e385" (UID: "0a47fca1-5b0a-41c4-a75a-15e005e0e385"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.313129 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0a47fca1-5b0a-41c4-a75a-15e005e0e385" (UID: "0a47fca1-5b0a-41c4-a75a-15e005e0e385"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.319865 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a47fca1-5b0a-41c4-a75a-15e005e0e385" (UID: "0a47fca1-5b0a-41c4-a75a-15e005e0e385"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.322173 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a47fca1-5b0a-41c4-a75a-15e005e0e385" (UID: "0a47fca1-5b0a-41c4-a75a-15e005e0e385"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.341401 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.341449 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.341465 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpwmw\" (UniqueName: \"kubernetes.io/projected/0a47fca1-5b0a-41c4-a75a-15e005e0e385-kube-api-access-mpwmw\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.341479 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.341489 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.341498 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a47fca1-5b0a-41c4-a75a-15e005e0e385-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:20 crc kubenswrapper[4922]: I0126 14:33:20.447199 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-78d579bbd7-jvjv8"] Jan 26 14:33:21 crc kubenswrapper[4922]: I0126 14:33:21.009801 4922 generic.go:334] "Generic (PLEG): container finished" podID="deee4578-fc2b-4162-b39e-012e9b6b2e8a" containerID="649a272ccf1ab0404f368b935190348c6c198ec4b0dd7455b217e24f78f843c2" exitCode=0 Jan 26 14:33:21 crc kubenswrapper[4922]: I0126 14:33:21.010134 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55556d9745-knxv2" Jan 26 14:33:21 crc kubenswrapper[4922]: I0126 14:33:21.009966 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" event={"ID":"deee4578-fc2b-4162-b39e-012e9b6b2e8a","Type":"ContainerDied","Data":"649a272ccf1ab0404f368b935190348c6c198ec4b0dd7455b217e24f78f843c2"} Jan 26 14:33:21 crc kubenswrapper[4922]: I0126 14:33:21.010333 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" event={"ID":"deee4578-fc2b-4162-b39e-012e9b6b2e8a","Type":"ContainerStarted","Data":"0e4359b2deec42cf8a07eae1fc4346a1d386d3f7b11a53224c288f90ed2353dd"} Jan 26 14:33:22 crc kubenswrapper[4922]: I0126 14:33:22.025135 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" event={"ID":"deee4578-fc2b-4162-b39e-012e9b6b2e8a","Type":"ContainerStarted","Data":"5afe46a732642ae20fc28a109313efb9296094260a65e337a13e1f32b17d08c0"} Jan 26 14:33:22 crc kubenswrapper[4922]: I0126 14:33:22.025477 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:22 crc kubenswrapper[4922]: I0126 14:33:22.056826 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" podStartSLOduration=3.056806764 podStartE2EDuration="3.056806764s" podCreationTimestamp="2026-01-26 14:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:33:22.045324013 +0000 UTC m=+1419.247586795" watchObservedRunningTime="2026-01-26 14:33:22.056806764 +0000 UTC m=+1419.259069546" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.004323 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78d579bbd7-jvjv8" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.089868 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fc75595-swsxm"] Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.090818 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57fc75595-swsxm" podUID="a68cd785-7533-4d0d-83de-bd8285f883e3" containerName="dnsmasq-dns" containerID="cri-o://3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698" gracePeriod=10 Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.623334 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57fc75595-swsxm" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.771404 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-config\") pod \"a68cd785-7533-4d0d-83de-bd8285f883e3\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.771634 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-svc\") pod \"a68cd785-7533-4d0d-83de-bd8285f883e3\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.771664 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw28p\" (UniqueName: \"kubernetes.io/projected/a68cd785-7533-4d0d-83de-bd8285f883e3-kube-api-access-fw28p\") pod \"a68cd785-7533-4d0d-83de-bd8285f883e3\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.771739 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-sb\") pod \"a68cd785-7533-4d0d-83de-bd8285f883e3\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.771802 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-swift-storage-0\") pod \"a68cd785-7533-4d0d-83de-bd8285f883e3\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.771836 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-openstack-edpm-ipam\") pod \"a68cd785-7533-4d0d-83de-bd8285f883e3\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.771870 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-nb\") pod \"a68cd785-7533-4d0d-83de-bd8285f883e3\" (UID: \"a68cd785-7533-4d0d-83de-bd8285f883e3\") " Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.777472 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a68cd785-7533-4d0d-83de-bd8285f883e3-kube-api-access-fw28p" (OuterVolumeSpecName: "kube-api-access-fw28p") pod "a68cd785-7533-4d0d-83de-bd8285f883e3" (UID: "a68cd785-7533-4d0d-83de-bd8285f883e3"). InnerVolumeSpecName "kube-api-access-fw28p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.828798 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a68cd785-7533-4d0d-83de-bd8285f883e3" (UID: "a68cd785-7533-4d0d-83de-bd8285f883e3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.828811 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a68cd785-7533-4d0d-83de-bd8285f883e3" (UID: "a68cd785-7533-4d0d-83de-bd8285f883e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.843510 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-config" (OuterVolumeSpecName: "config") pod "a68cd785-7533-4d0d-83de-bd8285f883e3" (UID: "a68cd785-7533-4d0d-83de-bd8285f883e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.848810 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a68cd785-7533-4d0d-83de-bd8285f883e3" (UID: "a68cd785-7533-4d0d-83de-bd8285f883e3"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.855945 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a68cd785-7533-4d0d-83de-bd8285f883e3" (UID: "a68cd785-7533-4d0d-83de-bd8285f883e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.859345 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a68cd785-7533-4d0d-83de-bd8285f883e3" (UID: "a68cd785-7533-4d0d-83de-bd8285f883e3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.874558 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.874598 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.874616 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.874634 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.874645 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.874657 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw28p\" (UniqueName: \"kubernetes.io/projected/a68cd785-7533-4d0d-83de-bd8285f883e3-kube-api-access-fw28p\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:30 crc kubenswrapper[4922]: I0126 14:33:30.874671 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a68cd785-7533-4d0d-83de-bd8285f883e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.138730 4922 generic.go:334] "Generic (PLEG): container finished" podID="a68cd785-7533-4d0d-83de-bd8285f883e3" containerID="3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698" exitCode=0 Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.138813 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fc75595-swsxm" event={"ID":"a68cd785-7533-4d0d-83de-bd8285f883e3","Type":"ContainerDied","Data":"3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698"} Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.138863 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fc75595-swsxm" event={"ID":"a68cd785-7533-4d0d-83de-bd8285f883e3","Type":"ContainerDied","Data":"111192c38b3e8b0ba53fe9f371bf071925532263ec73c84c99dd117f06949f7d"} Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.138899 4922 scope.go:117] "RemoveContainer" containerID="3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698" Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.139214 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57fc75595-swsxm"
Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.175257 4922 scope.go:117] "RemoveContainer" containerID="a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c"
Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.185114 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fc75595-swsxm"]
Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.199127 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57fc75595-swsxm"]
Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.213030 4922 scope.go:117] "RemoveContainer" containerID="3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698"
Jan 26 14:33:31 crc kubenswrapper[4922]: E0126 14:33:31.213595 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698\": container with ID starting with 3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698 not found: ID does not exist" containerID="3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698"
Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.213640 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698"} err="failed to get container status \"3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698\": rpc error: code = NotFound desc = could not find container \"3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698\": container with ID starting with 3ef91b28f24042f2e02882470e932c764f9c843abacd495a018c767a4cc33698 not found: ID does not exist"
Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.213665 4922 scope.go:117] "RemoveContainer" containerID="a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c"
Jan 26 14:33:31 crc kubenswrapper[4922]: E0126 14:33:31.214123 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c\": container with ID starting with a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c not found: ID does not exist" containerID="a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c"
Jan 26 14:33:31 crc kubenswrapper[4922]: I0126 14:33:31.214184 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c"} err="failed to get container status \"a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c\": rpc error: code = NotFound desc = could not find container \"a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c\": container with ID starting with a6c7836e5f8221eb4b99ba63d6a86252603c874d17304dfbf80335bcff77cf5c not found: ID does not exist"
Jan 26 14:33:33 crc kubenswrapper[4922]: I0126 14:33:33.115639 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a68cd785-7533-4d0d-83de-bd8285f883e3" path="/var/lib/kubelet/pods/a68cd785-7533-4d0d-83de-bd8285f883e3/volumes"
Jan 26 14:33:33 crc kubenswrapper[4922]: I0126 14:33:33.169144 4922 generic.go:334] "Generic (PLEG): container finished" podID="34b7c66d-87b0-4db4-aa8c-7dd19293e8fd" containerID="7b7d12433c92f2aadd5372c9f75ec0a1993b68814d26d53e292f5bcbbccb71d5" exitCode=0
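Annotation (not part of the captured journal): the pod_startup_latency_tracker entries at 14:33:22 above and 14:33:34 below report podStartE2EDuration as the watch-observed running time minus podCreationTimestamp. A small Go check of that arithmetic, with both timestamps copied verbatim from the dnsmasq-dns-78d579bbd7-jvjv8 entry; the layout string is an assumption that matches Go's time.Time String() output as printed in these lines.

// slocheck.go — illustrative only: recomputes podStartE2EDuration for the
// dnsmasq-dns-78d579bbd7-jvjv8 pod from the timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout of time.Time.String(), which is how the tracker prints values.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-26 14:33:19 +0000 UTC") // podCreationTimestamp
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-26 14:33:22.056806764 +0000 UTC") // watchObservedRunningTime
	if err != nil {
		panic(err)
	}
	// Prints 3.056806764s, matching podStartE2EDuration in the entry above.
	fmt.Println(running.Sub(created))
}

Here podStartSLOduration equals the E2E figure because firstStartedPulling and lastFinishedPulling are zero; in the 14:34:30 entry further down, the SLO figure (2.474s) is smaller than the E2E figure (2.935s) by roughly the image-pull window between those two timestamps.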
Jan 26 14:33:33 crc kubenswrapper[4922]: I0126 14:33:33.169199 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd","Type":"ContainerDied","Data":"7b7d12433c92f2aadd5372c9f75ec0a1993b68814d26d53e292f5bcbbccb71d5"} Jan 26 14:33:34 crc kubenswrapper[4922]: I0126 14:33:34.181542 4922 generic.go:334] "Generic (PLEG): container finished" podID="e3a4bf42-9b24-473a-bca6-f81f1d0884fb" containerID="23ef162c4cbf97da456f5ed0cd039b59623b1da61de00b7b092ff9f420639448" exitCode=0 Jan 26 14:33:34 crc kubenswrapper[4922]: I0126 14:33:34.181643 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e3a4bf42-9b24-473a-bca6-f81f1d0884fb","Type":"ContainerDied","Data":"23ef162c4cbf97da456f5ed0cd039b59623b1da61de00b7b092ff9f420639448"} Jan 26 14:33:34 crc kubenswrapper[4922]: I0126 14:33:34.186392 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34b7c66d-87b0-4db4-aa8c-7dd19293e8fd","Type":"ContainerStarted","Data":"5ca1e8915b916797c0d654b767fca1568009895c865cfd8469463d5394d2b12c"} Jan 26 14:33:34 crc kubenswrapper[4922]: I0126 14:33:34.186623 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:33:34 crc kubenswrapper[4922]: I0126 14:33:34.255821 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.255801675 podStartE2EDuration="37.255801675s" podCreationTimestamp="2026-01-26 14:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:33:34.241859146 +0000 UTC m=+1431.444121918" watchObservedRunningTime="2026-01-26 14:33:34.255801675 +0000 UTC m=+1431.458064447" Jan 26 14:33:35 crc kubenswrapper[4922]: I0126 14:33:35.196237 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e3a4bf42-9b24-473a-bca6-f81f1d0884fb","Type":"ContainerStarted","Data":"e2b23c5fa939c359b4ca9ad6f0cc2818d819a769f00c08962f6457e1e4efc3d8"} Jan 26 14:33:35 crc kubenswrapper[4922]: I0126 14:33:35.196598 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 14:33:35 crc kubenswrapper[4922]: I0126 14:33:35.226518 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.226497967 podStartE2EDuration="39.226497967s" podCreationTimestamp="2026-01-26 14:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:33:35.216861044 +0000 UTC m=+1432.419123816" watchObservedRunningTime="2026-01-26 14:33:35.226497967 +0000 UTC m=+1432.428760739" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.821521 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc"] Jan 26 14:33:43 crc kubenswrapper[4922]: E0126 14:33:43.823524 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" containerName="dnsmasq-dns" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.823621 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" containerName="dnsmasq-dns" Jan 26 14:33:43 crc 
kubenswrapper[4922]: E0126 14:33:43.823721 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" containerName="init" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.823773 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" containerName="init" Jan 26 14:33:43 crc kubenswrapper[4922]: E0126 14:33:43.823832 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a68cd785-7533-4d0d-83de-bd8285f883e3" containerName="init" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.823884 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a68cd785-7533-4d0d-83de-bd8285f883e3" containerName="init" Jan 26 14:33:43 crc kubenswrapper[4922]: E0126 14:33:43.823949 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a68cd785-7533-4d0d-83de-bd8285f883e3" containerName="dnsmasq-dns" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.824005 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a68cd785-7533-4d0d-83de-bd8285f883e3" containerName="dnsmasq-dns" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.824258 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" containerName="dnsmasq-dns" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.824332 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a68cd785-7533-4d0d-83de-bd8285f883e3" containerName="dnsmasq-dns" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.825051 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.827776 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.828690 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.830556 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.840431 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.850740 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc"] Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.937766 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.938048 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.938329 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:43 crc kubenswrapper[4922]: I0126 14:33:43.938491 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwrc\" (UniqueName: \"kubernetes.io/projected/650a3e49-f342-4b32-940a-2f64bdb45fb3-kube-api-access-6cwrc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.040809 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.040887 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.040999 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cwrc\" (UniqueName: \"kubernetes.io/projected/650a3e49-f342-4b32-940a-2f64bdb45fb3-kube-api-access-6cwrc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.041104 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.048352 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.059171 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: 
\"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.061846 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.063727 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cwrc\" (UniqueName: \"kubernetes.io/projected/650a3e49-f342-4b32-940a-2f64bdb45fb3-kube-api-access-6cwrc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.149741 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:33:44 crc kubenswrapper[4922]: I0126 14:33:44.552107 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc"] Jan 26 14:33:45 crc kubenswrapper[4922]: I0126 14:33:45.310453 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" event={"ID":"650a3e49-f342-4b32-940a-2f64bdb45fb3","Type":"ContainerStarted","Data":"b37dde6ee4a322a335924188ad5389b6d05401d35ed84c90298b77edfdc3e17a"} Jan 26 14:33:47 crc kubenswrapper[4922]: I0126 14:33:47.455306 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 14:33:47 crc kubenswrapper[4922]: I0126 14:33:47.509590 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 14:33:51 crc kubenswrapper[4922]: I0126 14:33:51.167149 4922 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0a47fca1-5b0a-41c4-a75a-15e005e0e385"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0a47fca1-5b0a-41c4-a75a-15e005e0e385] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0a47fca1_5b0a_41c4_a75a_15e005e0e385.slice" Jan 26 14:33:51 crc kubenswrapper[4922]: E0126 14:33:51.167534 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod0a47fca1-5b0a-41c4-a75a-15e005e0e385] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod0a47fca1-5b0a-41c4-a75a-15e005e0e385] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0a47fca1_5b0a_41c4_a75a_15e005e0e385.slice" pod="openstack/dnsmasq-dns-55556d9745-knxv2" podUID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" Jan 26 14:33:51 crc kubenswrapper[4922]: I0126 14:33:51.373857 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55556d9745-knxv2"
Jan 26 14:33:51 crc kubenswrapper[4922]: I0126 14:33:51.397393 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55556d9745-knxv2"]
Jan 26 14:33:51 crc kubenswrapper[4922]: I0126 14:33:51.405576 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55556d9745-knxv2"]
Jan 26 14:33:53 crc kubenswrapper[4922]: I0126 14:33:53.105459 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a47fca1-5b0a-41c4-a75a-15e005e0e385" path="/var/lib/kubelet/pods/0a47fca1-5b0a-41c4-a75a-15e005e0e385/volumes"
Jan 26 14:33:56 crc kubenswrapper[4922]: I0126 14:33:56.765622 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="205c6bf6-b838-4bea-9cf8-df9fe42bd53f" containerName="galera" probeResult="failure" output="command timed out"
Jan 26 14:33:56 crc kubenswrapper[4922]: I0126 14:33:56.765850 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="205c6bf6-b838-4bea-9cf8-df9fe42bd53f" containerName="galera" probeResult="failure" output="command timed out"
Jan 26 14:33:58 crc kubenswrapper[4922]: E0126 14:33:58.759385 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Jan 26 14:33:58 crc kubenswrapper[4922]: E0126 14:33:58.760119 4922 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 26 14:33:58 crc kubenswrapper[4922]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Jan 26 14:33:58 crc kubenswrapper[4922]: - hosts: all
Jan 26 14:33:58 crc kubenswrapper[4922]: strategy: linear
Jan 26 14:33:58 crc kubenswrapper[4922]: tasks:
Jan 26 14:33:58 crc kubenswrapper[4922]: - name: Enable podified-repos
Jan 26 14:33:58 crc kubenswrapper[4922]: become: true
Jan 26 14:33:58 crc kubenswrapper[4922]: ansible.builtin.shell: |
Jan 26 14:33:58 crc kubenswrapper[4922]: set -euxo pipefail
Jan 26 14:33:58 crc kubenswrapper[4922]: pushd /var/tmp
Jan 26 14:33:58 crc kubenswrapper[4922]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Jan 26 14:33:58 crc kubenswrapper[4922]: pushd repo-setup-main
Jan 26 14:33:58 crc kubenswrapper[4922]: python3 -m venv ./venv
Jan 26 14:33:58 crc kubenswrapper[4922]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Jan 26 14:33:58 crc kubenswrapper[4922]: ./venv/bin/repo-setup current-podified -b antelope
Jan 26 14:33:58 crc kubenswrapper[4922]: popd
Jan 26 14:33:58 crc kubenswrapper[4922]: rm -rf repo-setup-main
Jan 26 14:33:58 crc kubenswrapper[4922]:
Jan 26 14:33:58 crc kubenswrapper[4922]:
Jan 26 14:33:58 crc kubenswrapper[4922]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Jan 26 14:33:58 crc kubenswrapper[4922]: edpm_override_hosts: openstack-edpm-ipam
Jan 26 14:33:58 crc kubenswrapper[4922]: edpm_service_type: repo-setup
Jan 26 14:33:58 crc kubenswrapper[4922]:
Jan 26 14:33:58 crc kubenswrapper[4922]:
Jan 26 14:33:58 crc kubenswrapper[4922]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6cwrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc_openstack(650a3e49-f342-4b32-940a-2f64bdb45fb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 26 14:33:58 crc kubenswrapper[4922]: > logger="UnhandledError" Jan 26 14:33:58 crc kubenswrapper[4922]: E0126 14:33:58.762265 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" podUID="650a3e49-f342-4b32-940a-2f64bdb45fb3" Jan 26 14:33:59 crc kubenswrapper[4922]: E0126 14:33:59.459572 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" podUID="650a3e49-f342-4b32-940a-2f64bdb45fb3" Jan 26 14:34:11 crc kubenswrapper[4922]: I0126 14:34:11.602485 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" event={"ID":"650a3e49-f342-4b32-940a-2f64bdb45fb3","Type":"ContainerStarted","Data":"7950233d05a1016bb202855af392856c5a64f97ca914e91187b133a2778d190e"} Jan 26 14:34:13 crc kubenswrapper[4922]: I0126 14:34:13.388861 4922 scope.go:117] "RemoveContainer" containerID="27f01f1b0fc7c2cb0d45acaf162ee20f5cdacea5e8b4eac03e1be1c62ab89d3d" Jan 26 14:34:13 crc kubenswrapper[4922]: I0126 14:34:13.434367 4922 scope.go:117] "RemoveContainer" containerID="58b9816cb6028c07e4d32713a778cf725474dc6dcc68f36e368984c76cde6237" Jan 26 14:34:13 crc kubenswrapper[4922]: I0126 14:34:13.497560 4922 
scope.go:117] "RemoveContainer" containerID="369c04c6f7f90ae2564ca2a53a473041f8373cf8cdbcf61758c40bce5d94a99e" Jan 26 14:34:13 crc kubenswrapper[4922]: I0126 14:34:13.582866 4922 scope.go:117] "RemoveContainer" containerID="638bb3703e0410b6015018709f7753c34e6d312ba6aa1892b9ca9a33c0a68957" Jan 26 14:34:13 crc kubenswrapper[4922]: I0126 14:34:13.627009 4922 scope.go:117] "RemoveContainer" containerID="fd617cfc26cf40afb118fea46b2780dbc5f1237b667542be09c2d22405c07aa3" Jan 26 14:34:26 crc kubenswrapper[4922]: I0126 14:34:26.856493 4922 generic.go:334] "Generic (PLEG): container finished" podID="650a3e49-f342-4b32-940a-2f64bdb45fb3" containerID="7950233d05a1016bb202855af392856c5a64f97ca914e91187b133a2778d190e" exitCode=0 Jan 26 14:34:26 crc kubenswrapper[4922]: I0126 14:34:26.856594 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" event={"ID":"650a3e49-f342-4b32-940a-2f64bdb45fb3","Type":"ContainerDied","Data":"7950233d05a1016bb202855af392856c5a64f97ca914e91187b133a2778d190e"} Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.356794 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.480433 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cwrc\" (UniqueName: \"kubernetes.io/projected/650a3e49-f342-4b32-940a-2f64bdb45fb3-kube-api-access-6cwrc\") pod \"650a3e49-f342-4b32-940a-2f64bdb45fb3\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.480558 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-inventory\") pod \"650a3e49-f342-4b32-940a-2f64bdb45fb3\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.480726 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-ssh-key-openstack-edpm-ipam\") pod \"650a3e49-f342-4b32-940a-2f64bdb45fb3\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.480787 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-repo-setup-combined-ca-bundle\") pod \"650a3e49-f342-4b32-940a-2f64bdb45fb3\" (UID: \"650a3e49-f342-4b32-940a-2f64bdb45fb3\") " Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.486281 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "650a3e49-f342-4b32-940a-2f64bdb45fb3" (UID: "650a3e49-f342-4b32-940a-2f64bdb45fb3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.487023 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650a3e49-f342-4b32-940a-2f64bdb45fb3-kube-api-access-6cwrc" (OuterVolumeSpecName: "kube-api-access-6cwrc") pod "650a3e49-f342-4b32-940a-2f64bdb45fb3" (UID: "650a3e49-f342-4b32-940a-2f64bdb45fb3"). InnerVolumeSpecName "kube-api-access-6cwrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.512386 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-inventory" (OuterVolumeSpecName: "inventory") pod "650a3e49-f342-4b32-940a-2f64bdb45fb3" (UID: "650a3e49-f342-4b32-940a-2f64bdb45fb3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.521956 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "650a3e49-f342-4b32-940a-2f64bdb45fb3" (UID: "650a3e49-f342-4b32-940a-2f64bdb45fb3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.584090 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cwrc\" (UniqueName: \"kubernetes.io/projected/650a3e49-f342-4b32-940a-2f64bdb45fb3-kube-api-access-6cwrc\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.584147 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.584168 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.584188 4922 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650a3e49-f342-4b32-940a-2f64bdb45fb3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.883667 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" event={"ID":"650a3e49-f342-4b32-940a-2f64bdb45fb3","Type":"ContainerDied","Data":"b37dde6ee4a322a335924188ad5389b6d05401d35ed84c90298b77edfdc3e17a"} Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.883731 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b37dde6ee4a322a335924188ad5389b6d05401d35ed84c90298b77edfdc3e17a" Jan 26 14:34:28 crc kubenswrapper[4922]: I0126 14:34:28.883779 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.025428 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l"] Jan 26 14:34:29 crc kubenswrapper[4922]: E0126 14:34:29.025999 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650a3e49-f342-4b32-940a-2f64bdb45fb3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.026028 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="650a3e49-f342-4b32-940a-2f64bdb45fb3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.026347 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="650a3e49-f342-4b32-940a-2f64bdb45fb3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.027216 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.031673 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.032104 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.032103 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.042191 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.050486 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l"] Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.102854 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l759l\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.102894 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fp7f\" (UniqueName: \"kubernetes.io/projected/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-kube-api-access-5fp7f\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l759l\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.102915 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l759l\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.204901 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l759l\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.205289 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fp7f\" (UniqueName: \"kubernetes.io/projected/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-kube-api-access-5fp7f\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l759l\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.205329 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l759l\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.210465 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l759l\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.213760 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l759l\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.227335 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fp7f\" (UniqueName: \"kubernetes.io/projected/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-kube-api-access-5fp7f\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l759l\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.353732 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:29 crc kubenswrapper[4922]: I0126 14:34:29.941802 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l"] Jan 26 14:34:30 crc kubenswrapper[4922]: I0126 14:34:30.911094 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" event={"ID":"ed501c23-2119-4ef8-9d37-776d3a5c5d6e","Type":"ContainerStarted","Data":"14573c3285d5c6e2a0201aadc31883f2b6dbf1b75fcf310c27745c9153dad528"} Jan 26 14:34:30 crc kubenswrapper[4922]: I0126 14:34:30.911362 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" event={"ID":"ed501c23-2119-4ef8-9d37-776d3a5c5d6e","Type":"ContainerStarted","Data":"00da0dbc36b269a86a8e49e4d93d3a64768ad38b7d58f7c0ba231bfc92fba334"} Jan 26 14:34:30 crc kubenswrapper[4922]: I0126 14:34:30.935175 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" podStartSLOduration=2.474370479 podStartE2EDuration="2.935159912s" podCreationTimestamp="2026-01-26 14:34:28 +0000 UTC" firstStartedPulling="2026-01-26 14:34:29.935137684 +0000 UTC m=+1487.137400456" lastFinishedPulling="2026-01-26 14:34:30.395927077 +0000 UTC m=+1487.598189889" observedRunningTime="2026-01-26 14:34:30.932689894 +0000 UTC m=+1488.134952666" watchObservedRunningTime="2026-01-26 14:34:30.935159912 +0000 UTC m=+1488.137422684" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.446553 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j64lv"] Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.451720 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.459435 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j64lv"] Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.499428 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-catalog-content\") pod \"redhat-marketplace-j64lv\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.499498 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-utilities\") pod \"redhat-marketplace-j64lv\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.499822 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjwf\" (UniqueName: \"kubernetes.io/projected/4c2cff58-2576-44b0-aa67-e04dc492d191-kube-api-access-7vjwf\") pod \"redhat-marketplace-j64lv\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.601934 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjwf\" (UniqueName: \"kubernetes.io/projected/4c2cff58-2576-44b0-aa67-e04dc492d191-kube-api-access-7vjwf\") pod \"redhat-marketplace-j64lv\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.602191 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-catalog-content\") pod \"redhat-marketplace-j64lv\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.602754 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-catalog-content\") pod \"redhat-marketplace-j64lv\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.602840 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-utilities\") pod \"redhat-marketplace-j64lv\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.603182 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-utilities\") pod \"redhat-marketplace-j64lv\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.628960 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7vjwf\" (UniqueName: \"kubernetes.io/projected/4c2cff58-2576-44b0-aa67-e04dc492d191-kube-api-access-7vjwf\") pod \"redhat-marketplace-j64lv\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.823095 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.964427 4922 generic.go:334] "Generic (PLEG): container finished" podID="ed501c23-2119-4ef8-9d37-776d3a5c5d6e" containerID="14573c3285d5c6e2a0201aadc31883f2b6dbf1b75fcf310c27745c9153dad528" exitCode=0 Jan 26 14:34:33 crc kubenswrapper[4922]: I0126 14:34:33.964558 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" event={"ID":"ed501c23-2119-4ef8-9d37-776d3a5c5d6e","Type":"ContainerDied","Data":"14573c3285d5c6e2a0201aadc31883f2b6dbf1b75fcf310c27745c9153dad528"} Jan 26 14:34:34 crc kubenswrapper[4922]: I0126 14:34:34.192657 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j64lv"] Jan 26 14:34:34 crc kubenswrapper[4922]: I0126 14:34:34.976613 4922 generic.go:334] "Generic (PLEG): container finished" podID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerID="57db4cf20cd11c6926e5655811790eabf7a30849755847713c75f25ec1494bbc" exitCode=0 Jan 26 14:34:34 crc kubenswrapper[4922]: I0126 14:34:34.976824 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j64lv" event={"ID":"4c2cff58-2576-44b0-aa67-e04dc492d191","Type":"ContainerDied","Data":"57db4cf20cd11c6926e5655811790eabf7a30849755847713c75f25ec1494bbc"} Jan 26 14:34:34 crc kubenswrapper[4922]: I0126 14:34:34.977206 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j64lv" event={"ID":"4c2cff58-2576-44b0-aa67-e04dc492d191","Type":"ContainerStarted","Data":"e297642e09502c19a5331c5f61d2382b2a477a1b4ec039a7b16a58a4ac3c25d4"} Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.416214 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.560521 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-ssh-key-openstack-edpm-ipam\") pod \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.560627 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-inventory\") pod \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.560789 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fp7f\" (UniqueName: \"kubernetes.io/projected/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-kube-api-access-5fp7f\") pod \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\" (UID: \"ed501c23-2119-4ef8-9d37-776d3a5c5d6e\") " Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.565327 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-kube-api-access-5fp7f" (OuterVolumeSpecName: "kube-api-access-5fp7f") pod "ed501c23-2119-4ef8-9d37-776d3a5c5d6e" (UID: "ed501c23-2119-4ef8-9d37-776d3a5c5d6e"). InnerVolumeSpecName "kube-api-access-5fp7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.601645 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-inventory" (OuterVolumeSpecName: "inventory") pod "ed501c23-2119-4ef8-9d37-776d3a5c5d6e" (UID: "ed501c23-2119-4ef8-9d37-776d3a5c5d6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.605502 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed501c23-2119-4ef8-9d37-776d3a5c5d6e" (UID: "ed501c23-2119-4ef8-9d37-776d3a5c5d6e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.663013 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fp7f\" (UniqueName: \"kubernetes.io/projected/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-kube-api-access-5fp7f\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.663386 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.663411 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed501c23-2119-4ef8-9d37-776d3a5c5d6e-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.990793 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j64lv" event={"ID":"4c2cff58-2576-44b0-aa67-e04dc492d191","Type":"ContainerStarted","Data":"6ad6e961c41d7b81a2e1964c67fbd2fa4da314801c3970ea7ea669a8ece34e47"} Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.997863 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.998255 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l759l" event={"ID":"ed501c23-2119-4ef8-9d37-776d3a5c5d6e","Type":"ContainerDied","Data":"00da0dbc36b269a86a8e49e4d93d3a64768ad38b7d58f7c0ba231bfc92fba334"} Jan 26 14:34:35 crc kubenswrapper[4922]: I0126 14:34:35.998294 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00da0dbc36b269a86a8e49e4d93d3a64768ad38b7d58f7c0ba231bfc92fba334" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.075337 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65"] Jan 26 14:34:36 crc kubenswrapper[4922]: E0126 14:34:36.076226 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed501c23-2119-4ef8-9d37-776d3a5c5d6e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.076361 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed501c23-2119-4ef8-9d37-776d3a5c5d6e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.076884 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed501c23-2119-4ef8-9d37-776d3a5c5d6e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.078088 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.087306 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65"] Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.089088 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.089526 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.089872 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.090051 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.173667 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.173755 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.174053 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86fl2\" (UniqueName: \"kubernetes.io/projected/c6728b4b-8be0-4841-bbd4-0832817d537e-kube-api-access-86fl2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.174134 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.276173 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.276281 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.276385 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86fl2\" (UniqueName: \"kubernetes.io/projected/c6728b4b-8be0-4841-bbd4-0832817d537e-kube-api-access-86fl2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.276418 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.281960 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.282082 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.282156 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.297463 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86fl2\" (UniqueName: \"kubernetes.io/projected/c6728b4b-8be0-4841-bbd4-0832817d537e-kube-api-access-86fl2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.404576 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:34:36 crc kubenswrapper[4922]: W0126 14:34:36.974899 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6728b4b_8be0_4841_bbd4_0832817d537e.slice/crio-7c1874f5acc153eff0533c980a2e08e21bb367c9f15309557b5f597e53428428 WatchSource:0}: Error finding container 7c1874f5acc153eff0533c980a2e08e21bb367c9f15309557b5f597e53428428: Status 404 returned error can't find the container with id 7c1874f5acc153eff0533c980a2e08e21bb367c9f15309557b5f597e53428428 Jan 26 14:34:36 crc kubenswrapper[4922]: I0126 14:34:36.975752 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65"] Jan 26 14:34:37 crc kubenswrapper[4922]: I0126 14:34:37.010763 4922 generic.go:334] "Generic (PLEG): container finished" podID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerID="6ad6e961c41d7b81a2e1964c67fbd2fa4da314801c3970ea7ea669a8ece34e47" exitCode=0 Jan 26 14:34:37 crc kubenswrapper[4922]: I0126 14:34:37.010811 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j64lv" event={"ID":"4c2cff58-2576-44b0-aa67-e04dc492d191","Type":"ContainerDied","Data":"6ad6e961c41d7b81a2e1964c67fbd2fa4da314801c3970ea7ea669a8ece34e47"} Jan 26 14:34:37 crc kubenswrapper[4922]: I0126 14:34:37.010860 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j64lv" event={"ID":"4c2cff58-2576-44b0-aa67-e04dc492d191","Type":"ContainerStarted","Data":"19538f4d421dc8f14f343cf3aba40bd0cfeb9d0c741d66f12009fa469c37ecfb"} Jan 26 14:34:37 crc kubenswrapper[4922]: I0126 14:34:37.015777 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" event={"ID":"c6728b4b-8be0-4841-bbd4-0832817d537e","Type":"ContainerStarted","Data":"7c1874f5acc153eff0533c980a2e08e21bb367c9f15309557b5f597e53428428"} Jan 26 14:34:37 crc kubenswrapper[4922]: I0126 14:34:37.031000 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j64lv" podStartSLOduration=2.492572898 podStartE2EDuration="4.030982767s" podCreationTimestamp="2026-01-26 14:34:33 +0000 UTC" firstStartedPulling="2026-01-26 14:34:34.979344633 +0000 UTC m=+1492.181607415" lastFinishedPulling="2026-01-26 14:34:36.517754522 +0000 UTC m=+1493.720017284" observedRunningTime="2026-01-26 14:34:37.02671884 +0000 UTC m=+1494.228981622" watchObservedRunningTime="2026-01-26 14:34:37.030982767 +0000 UTC m=+1494.233245539" Jan 26 14:34:41 crc kubenswrapper[4922]: I0126 14:34:41.054218 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" event={"ID":"c6728b4b-8be0-4841-bbd4-0832817d537e","Type":"ContainerStarted","Data":"f2bfbb6c8969c709a22a02e1913f65e1e304040718326a5bdea0df237753d919"} Jan 26 14:34:41 crc kubenswrapper[4922]: I0126 14:34:41.086040 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" podStartSLOduration=2.223521864 podStartE2EDuration="5.086023079s" podCreationTimestamp="2026-01-26 14:34:36 +0000 UTC" firstStartedPulling="2026-01-26 14:34:36.977389305 +0000 UTC m=+1494.179652077" lastFinishedPulling="2026-01-26 14:34:39.83989052 +0000 UTC m=+1497.042153292" observedRunningTime="2026-01-26 
14:34:41.079740068 +0000 UTC m=+1498.282002860" watchObservedRunningTime="2026-01-26 14:34:41.086023079 +0000 UTC m=+1498.288285851" Jan 26 14:34:41 crc kubenswrapper[4922]: I0126 14:34:41.307176 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:34:41 crc kubenswrapper[4922]: I0126 14:34:41.307531 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:34:43 crc kubenswrapper[4922]: I0126 14:34:43.824425 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:43 crc kubenswrapper[4922]: I0126 14:34:43.825246 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:43 crc kubenswrapper[4922]: I0126 14:34:43.871293 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:44 crc kubenswrapper[4922]: I0126 14:34:44.126080 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:44 crc kubenswrapper[4922]: I0126 14:34:44.173290 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j64lv"] Jan 26 14:34:46 crc kubenswrapper[4922]: I0126 14:34:46.102386 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j64lv" podUID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerName="registry-server" containerID="cri-o://19538f4d421dc8f14f343cf3aba40bd0cfeb9d0c741d66f12009fa469c37ecfb" gracePeriod=2 Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.115367 4922 generic.go:334] "Generic (PLEG): container finished" podID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerID="19538f4d421dc8f14f343cf3aba40bd0cfeb9d0c741d66f12009fa469c37ecfb" exitCode=0 Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.115402 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j64lv" event={"ID":"4c2cff58-2576-44b0-aa67-e04dc492d191","Type":"ContainerDied","Data":"19538f4d421dc8f14f343cf3aba40bd0cfeb9d0c741d66f12009fa469c37ecfb"} Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.763780 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.821959 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-catalog-content\") pod \"4c2cff58-2576-44b0-aa67-e04dc492d191\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.822101 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-utilities\") pod \"4c2cff58-2576-44b0-aa67-e04dc492d191\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.822323 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vjwf\" (UniqueName: \"kubernetes.io/projected/4c2cff58-2576-44b0-aa67-e04dc492d191-kube-api-access-7vjwf\") pod \"4c2cff58-2576-44b0-aa67-e04dc492d191\" (UID: \"4c2cff58-2576-44b0-aa67-e04dc492d191\") " Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.824937 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-utilities" (OuterVolumeSpecName: "utilities") pod "4c2cff58-2576-44b0-aa67-e04dc492d191" (UID: "4c2cff58-2576-44b0-aa67-e04dc492d191"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.840208 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2cff58-2576-44b0-aa67-e04dc492d191-kube-api-access-7vjwf" (OuterVolumeSpecName: "kube-api-access-7vjwf") pod "4c2cff58-2576-44b0-aa67-e04dc492d191" (UID: "4c2cff58-2576-44b0-aa67-e04dc492d191"). InnerVolumeSpecName "kube-api-access-7vjwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.845586 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c2cff58-2576-44b0-aa67-e04dc492d191" (UID: "4c2cff58-2576-44b0-aa67-e04dc492d191"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.924706 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.924742 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vjwf\" (UniqueName: \"kubernetes.io/projected/4c2cff58-2576-44b0-aa67-e04dc492d191-kube-api-access-7vjwf\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:47 crc kubenswrapper[4922]: I0126 14:34:47.924755 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c2cff58-2576-44b0-aa67-e04dc492d191-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:34:48 crc kubenswrapper[4922]: I0126 14:34:48.128912 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j64lv" event={"ID":"4c2cff58-2576-44b0-aa67-e04dc492d191","Type":"ContainerDied","Data":"e297642e09502c19a5331c5f61d2382b2a477a1b4ec039a7b16a58a4ac3c25d4"} Jan 26 14:34:48 crc kubenswrapper[4922]: I0126 14:34:48.129932 4922 scope.go:117] "RemoveContainer" containerID="19538f4d421dc8f14f343cf3aba40bd0cfeb9d0c741d66f12009fa469c37ecfb" Jan 26 14:34:48 crc kubenswrapper[4922]: I0126 14:34:48.129073 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j64lv" Jan 26 14:34:48 crc kubenswrapper[4922]: I0126 14:34:48.162818 4922 scope.go:117] "RemoveContainer" containerID="6ad6e961c41d7b81a2e1964c67fbd2fa4da314801c3970ea7ea669a8ece34e47" Jan 26 14:34:48 crc kubenswrapper[4922]: I0126 14:34:48.172049 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j64lv"] Jan 26 14:34:48 crc kubenswrapper[4922]: I0126 14:34:48.191846 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j64lv"] Jan 26 14:34:48 crc kubenswrapper[4922]: I0126 14:34:48.201587 4922 scope.go:117] "RemoveContainer" containerID="57db4cf20cd11c6926e5655811790eabf7a30849755847713c75f25ec1494bbc" Jan 26 14:34:49 crc kubenswrapper[4922]: I0126 14:34:49.102602 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2cff58-2576-44b0-aa67-e04dc492d191" path="/var/lib/kubelet/pods/4c2cff58-2576-44b0-aa67-e04dc492d191/volumes" Jan 26 14:35:11 crc kubenswrapper[4922]: I0126 14:35:11.306908 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:35:11 crc kubenswrapper[4922]: I0126 14:35:11.307655 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:35:13 crc kubenswrapper[4922]: I0126 14:35:13.787877 4922 scope.go:117] "RemoveContainer" containerID="8d71bb4928fa54b6f37481baf35f506d97bd2e70fd3b905b3f846851e0cd95a0" Jan 26 14:35:13 crc kubenswrapper[4922]: I0126 14:35:13.836810 4922 scope.go:117] "RemoveContainer" 
containerID="9c27beff9fd9b342fc5d1de5421c2427c05c012c3f67f9619e0f59fef0745832" Jan 26 14:35:13 crc kubenswrapper[4922]: I0126 14:35:13.897754 4922 scope.go:117] "RemoveContainer" containerID="41c2feae30319db74ed0070366aeacba2673e8c7ee8a73bf72f82364ecf842d2" Jan 26 14:35:41 crc kubenswrapper[4922]: I0126 14:35:41.307012 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:35:41 crc kubenswrapper[4922]: I0126 14:35:41.307901 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:35:41 crc kubenswrapper[4922]: I0126 14:35:41.307985 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:35:41 crc kubenswrapper[4922]: I0126 14:35:41.309376 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:35:41 crc kubenswrapper[4922]: I0126 14:35:41.309500 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" gracePeriod=600 Jan 26 14:35:41 crc kubenswrapper[4922]: E0126 14:35:41.429209 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:35:41 crc kubenswrapper[4922]: I0126 14:35:41.761098 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" exitCode=0 Jan 26 14:35:41 crc kubenswrapper[4922]: I0126 14:35:41.761142 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70"} Jan 26 14:35:41 crc kubenswrapper[4922]: I0126 14:35:41.761195 4922 scope.go:117] "RemoveContainer" containerID="0fe8483d01fe17dae14bd575d394e895ec02b281bd5bc48e80a4af9b52b57371" Jan 26 14:35:41 crc kubenswrapper[4922]: I0126 14:35:41.761989 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:35:41 crc kubenswrapper[4922]: E0126 14:35:41.762245 4922 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:35:56 crc kubenswrapper[4922]: I0126 14:35:56.093056 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:35:56 crc kubenswrapper[4922]: E0126 14:35:56.093800 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:36:10 crc kubenswrapper[4922]: I0126 14:36:10.093126 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:36:10 crc kubenswrapper[4922]: E0126 14:36:10.094058 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.160940 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ckcqc"] Jan 26 14:36:13 crc kubenswrapper[4922]: E0126 14:36:13.162780 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerName="extract-content" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.162856 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerName="extract-content" Jan 26 14:36:13 crc kubenswrapper[4922]: E0126 14:36:13.162928 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerName="registry-server" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.162981 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerName="registry-server" Jan 26 14:36:13 crc kubenswrapper[4922]: E0126 14:36:13.163034 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerName="extract-utilities" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.163112 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerName="extract-utilities" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.163338 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2cff58-2576-44b0-aa67-e04dc492d191" containerName="registry-server" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.165107 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.190959 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckcqc"] Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.206735 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-utilities\") pod \"community-operators-ckcqc\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.206916 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76slb\" (UniqueName: \"kubernetes.io/projected/d98a0300-efe7-4408-b4e1-114c6b1f6804-kube-api-access-76slb\") pod \"community-operators-ckcqc\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.207277 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-catalog-content\") pod \"community-operators-ckcqc\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.309170 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76slb\" (UniqueName: \"kubernetes.io/projected/d98a0300-efe7-4408-b4e1-114c6b1f6804-kube-api-access-76slb\") pod \"community-operators-ckcqc\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.309579 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-catalog-content\") pod \"community-operators-ckcqc\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.309740 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-utilities\") pod \"community-operators-ckcqc\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.309993 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-catalog-content\") pod \"community-operators-ckcqc\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.310469 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-utilities\") pod \"community-operators-ckcqc\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.332281 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-76slb\" (UniqueName: \"kubernetes.io/projected/d98a0300-efe7-4408-b4e1-114c6b1f6804-kube-api-access-76slb\") pod \"community-operators-ckcqc\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:13 crc kubenswrapper[4922]: I0126 14:36:13.509332 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:14 crc kubenswrapper[4922]: I0126 14:36:14.021483 4922 scope.go:117] "RemoveContainer" containerID="9938c5908ce7a4d834bbe2bfdc7dce21cc8c0753522d09cd63b9aba4f67b8534" Jan 26 14:36:14 crc kubenswrapper[4922]: I0126 14:36:14.096698 4922 scope.go:117] "RemoveContainer" containerID="8fe9236217ac07171eb9781ddeab040f93f4cc2d342103141f61213faf13b3f9" Jan 26 14:36:14 crc kubenswrapper[4922]: I0126 14:36:14.103123 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckcqc"] Jan 26 14:36:14 crc kubenswrapper[4922]: I0126 14:36:14.135495 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcqc" event={"ID":"d98a0300-efe7-4408-b4e1-114c6b1f6804","Type":"ContainerStarted","Data":"90e170dc9656b7d32316d6dffffae140d03f4ae796eb4361469d093035c17c17"} Jan 26 14:36:15 crc kubenswrapper[4922]: I0126 14:36:15.151528 4922 generic.go:334] "Generic (PLEG): container finished" podID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerID="ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c" exitCode=0 Jan 26 14:36:15 crc kubenswrapper[4922]: I0126 14:36:15.151622 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcqc" event={"ID":"d98a0300-efe7-4408-b4e1-114c6b1f6804","Type":"ContainerDied","Data":"ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c"} Jan 26 14:36:15 crc kubenswrapper[4922]: I0126 14:36:15.155618 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:36:17 crc kubenswrapper[4922]: I0126 14:36:17.186587 4922 generic.go:334] "Generic (PLEG): container finished" podID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerID="574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee" exitCode=0 Jan 26 14:36:17 crc kubenswrapper[4922]: I0126 14:36:17.186693 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcqc" event={"ID":"d98a0300-efe7-4408-b4e1-114c6b1f6804","Type":"ContainerDied","Data":"574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee"} Jan 26 14:36:18 crc kubenswrapper[4922]: I0126 14:36:18.569428 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcqc" event={"ID":"d98a0300-efe7-4408-b4e1-114c6b1f6804","Type":"ContainerStarted","Data":"530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93"} Jan 26 14:36:18 crc kubenswrapper[4922]: I0126 14:36:18.594210 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ckcqc" podStartSLOduration=3.154091639 podStartE2EDuration="5.594186806s" podCreationTimestamp="2026-01-26 14:36:13 +0000 UTC" firstStartedPulling="2026-01-26 14:36:15.154662416 +0000 UTC m=+1592.356925228" lastFinishedPulling="2026-01-26 14:36:17.594757583 +0000 UTC m=+1594.797020395" observedRunningTime="2026-01-26 14:36:18.592081908 +0000 UTC m=+1595.794344680" 
watchObservedRunningTime="2026-01-26 14:36:18.594186806 +0000 UTC m=+1595.796449608" Jan 26 14:36:23 crc kubenswrapper[4922]: I0126 14:36:23.509629 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:23 crc kubenswrapper[4922]: I0126 14:36:23.510150 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:23 crc kubenswrapper[4922]: I0126 14:36:23.577479 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:23 crc kubenswrapper[4922]: I0126 14:36:23.725533 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:23 crc kubenswrapper[4922]: I0126 14:36:23.823466 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ckcqc"] Jan 26 14:36:25 crc kubenswrapper[4922]: I0126 14:36:25.092967 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:36:25 crc kubenswrapper[4922]: E0126 14:36:25.094003 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:36:25 crc kubenswrapper[4922]: I0126 14:36:25.668711 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ckcqc" podUID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerName="registry-server" containerID="cri-o://530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93" gracePeriod=2 Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.165344 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.202069 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-utilities\") pod \"d98a0300-efe7-4408-b4e1-114c6b1f6804\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.202180 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76slb\" (UniqueName: \"kubernetes.io/projected/d98a0300-efe7-4408-b4e1-114c6b1f6804-kube-api-access-76slb\") pod \"d98a0300-efe7-4408-b4e1-114c6b1f6804\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.202238 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-catalog-content\") pod \"d98a0300-efe7-4408-b4e1-114c6b1f6804\" (UID: \"d98a0300-efe7-4408-b4e1-114c6b1f6804\") " Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.203128 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-utilities" (OuterVolumeSpecName: "utilities") pod "d98a0300-efe7-4408-b4e1-114c6b1f6804" (UID: "d98a0300-efe7-4408-b4e1-114c6b1f6804"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.218407 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98a0300-efe7-4408-b4e1-114c6b1f6804-kube-api-access-76slb" (OuterVolumeSpecName: "kube-api-access-76slb") pod "d98a0300-efe7-4408-b4e1-114c6b1f6804" (UID: "d98a0300-efe7-4408-b4e1-114c6b1f6804"). InnerVolumeSpecName "kube-api-access-76slb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.261525 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d98a0300-efe7-4408-b4e1-114c6b1f6804" (UID: "d98a0300-efe7-4408-b4e1-114c6b1f6804"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.304862 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.304898 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76slb\" (UniqueName: \"kubernetes.io/projected/d98a0300-efe7-4408-b4e1-114c6b1f6804-kube-api-access-76slb\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.304909 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98a0300-efe7-4408-b4e1-114c6b1f6804-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.678749 4922 generic.go:334] "Generic (PLEG): container finished" podID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerID="530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93" exitCode=0 Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.678798 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcqc" event={"ID":"d98a0300-efe7-4408-b4e1-114c6b1f6804","Type":"ContainerDied","Data":"530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93"} Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.678822 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckcqc" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.678837 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcqc" event={"ID":"d98a0300-efe7-4408-b4e1-114c6b1f6804","Type":"ContainerDied","Data":"90e170dc9656b7d32316d6dffffae140d03f4ae796eb4361469d093035c17c17"} Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.678857 4922 scope.go:117] "RemoveContainer" containerID="530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.720535 4922 scope.go:117] "RemoveContainer" containerID="574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.738425 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ckcqc"] Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.753828 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ckcqc"] Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.754285 4922 scope.go:117] "RemoveContainer" containerID="ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.812656 4922 scope.go:117] "RemoveContainer" containerID="530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93" Jan 26 14:36:26 crc kubenswrapper[4922]: E0126 14:36:26.813354 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93\": container with ID starting with 530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93 not found: ID does not exist" containerID="530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.813412 
4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93"} err="failed to get container status \"530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93\": rpc error: code = NotFound desc = could not find container \"530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93\": container with ID starting with 530f98fb44796bc5a633a6a7fb428f84d533f2a2c12cd6c54a1a554cc90d1e93 not found: ID does not exist" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.813453 4922 scope.go:117] "RemoveContainer" containerID="574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee" Jan 26 14:36:26 crc kubenswrapper[4922]: E0126 14:36:26.813905 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee\": container with ID starting with 574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee not found: ID does not exist" containerID="574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.813938 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee"} err="failed to get container status \"574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee\": rpc error: code = NotFound desc = could not find container \"574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee\": container with ID starting with 574b26054b614308bbe25f5f3630407694371e8b4f2fe97d9f5f731081852dee not found: ID does not exist" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.813963 4922 scope.go:117] "RemoveContainer" containerID="ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c" Jan 26 14:36:26 crc kubenswrapper[4922]: E0126 14:36:26.814313 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c\": container with ID starting with ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c not found: ID does not exist" containerID="ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c" Jan 26 14:36:26 crc kubenswrapper[4922]: I0126 14:36:26.814358 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c"} err="failed to get container status \"ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c\": rpc error: code = NotFound desc = could not find container \"ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c\": container with ID starting with ad7a79420d30aba478812b44b7594aefbaa728e93b18284a848086e8830c3e6c not found: ID does not exist" Jan 26 14:36:27 crc kubenswrapper[4922]: I0126 14:36:27.105649 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d98a0300-efe7-4408-b4e1-114c6b1f6804" path="/var/lib/kubelet/pods/d98a0300-efe7-4408-b4e1-114c6b1f6804/volumes" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.445819 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g4t97"] Jan 26 14:36:38 crc kubenswrapper[4922]: E0126 14:36:38.447080 4922 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerName="registry-server" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.447098 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerName="registry-server" Jan 26 14:36:38 crc kubenswrapper[4922]: E0126 14:36:38.447129 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerName="extract-utilities" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.447139 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerName="extract-utilities" Jan 26 14:36:38 crc kubenswrapper[4922]: E0126 14:36:38.447175 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerName="extract-content" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.447184 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerName="extract-content" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.447450 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98a0300-efe7-4408-b4e1-114c6b1f6804" containerName="registry-server" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.449330 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.461572 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4t97"] Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.475904 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-utilities\") pod \"certified-operators-g4t97\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.476167 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvdrb\" (UniqueName: \"kubernetes.io/projected/42884ce0-499b-400a-a49b-83dc05a83491-kube-api-access-xvdrb\") pod \"certified-operators-g4t97\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.476215 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-catalog-content\") pod \"certified-operators-g4t97\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.579354 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvdrb\" (UniqueName: \"kubernetes.io/projected/42884ce0-499b-400a-a49b-83dc05a83491-kube-api-access-xvdrb\") pod \"certified-operators-g4t97\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.579526 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-catalog-content\") 
pod \"certified-operators-g4t97\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.579773 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-utilities\") pod \"certified-operators-g4t97\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.580190 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-catalog-content\") pod \"certified-operators-g4t97\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.580360 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-utilities\") pod \"certified-operators-g4t97\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.608483 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvdrb\" (UniqueName: \"kubernetes.io/projected/42884ce0-499b-400a-a49b-83dc05a83491-kube-api-access-xvdrb\") pod \"certified-operators-g4t97\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:38 crc kubenswrapper[4922]: I0126 14:36:38.817630 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:39 crc kubenswrapper[4922]: I0126 14:36:39.328666 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4t97"] Jan 26 14:36:39 crc kubenswrapper[4922]: I0126 14:36:39.821563 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4t97" event={"ID":"42884ce0-499b-400a-a49b-83dc05a83491","Type":"ContainerStarted","Data":"b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666"} Jan 26 14:36:39 crc kubenswrapper[4922]: I0126 14:36:39.821911 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4t97" event={"ID":"42884ce0-499b-400a-a49b-83dc05a83491","Type":"ContainerStarted","Data":"339af8d5a8383aa962f41a753ab70c7b0650d3aa481fcabaac9ef5c24e9e0956"} Jan 26 14:36:40 crc kubenswrapper[4922]: I0126 14:36:40.092995 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:36:40 crc kubenswrapper[4922]: E0126 14:36:40.093477 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:36:40 crc kubenswrapper[4922]: I0126 14:36:40.839264 4922 generic.go:334] "Generic (PLEG): container finished" podID="42884ce0-499b-400a-a49b-83dc05a83491" containerID="b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666" exitCode=0 Jan 26 14:36:40 crc kubenswrapper[4922]: I0126 14:36:40.839445 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4t97" event={"ID":"42884ce0-499b-400a-a49b-83dc05a83491","Type":"ContainerDied","Data":"b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666"} Jan 26 14:36:43 crc kubenswrapper[4922]: I0126 14:36:43.874688 4922 generic.go:334] "Generic (PLEG): container finished" podID="42884ce0-499b-400a-a49b-83dc05a83491" containerID="2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3" exitCode=0 Jan 26 14:36:43 crc kubenswrapper[4922]: I0126 14:36:43.874761 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4t97" event={"ID":"42884ce0-499b-400a-a49b-83dc05a83491","Type":"ContainerDied","Data":"2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3"} Jan 26 14:36:44 crc kubenswrapper[4922]: I0126 14:36:44.886950 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4t97" event={"ID":"42884ce0-499b-400a-a49b-83dc05a83491","Type":"ContainerStarted","Data":"62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118"} Jan 26 14:36:44 crc kubenswrapper[4922]: I0126 14:36:44.911578 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g4t97" podStartSLOduration=3.497154689 podStartE2EDuration="6.91155936s" podCreationTimestamp="2026-01-26 14:36:38 +0000 UTC" firstStartedPulling="2026-01-26 14:36:40.842495214 +0000 UTC m=+1618.044758026" lastFinishedPulling="2026-01-26 14:36:44.256899925 +0000 UTC m=+1621.459162697" observedRunningTime="2026-01-26 
14:36:44.90644063 +0000 UTC m=+1622.108703402" watchObservedRunningTime="2026-01-26 14:36:44.91155936 +0000 UTC m=+1622.113822132" Jan 26 14:36:48 crc kubenswrapper[4922]: I0126 14:36:48.818591 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:48 crc kubenswrapper[4922]: I0126 14:36:48.819808 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:48 crc kubenswrapper[4922]: I0126 14:36:48.877958 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:51 crc kubenswrapper[4922]: I0126 14:36:51.093523 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:36:51 crc kubenswrapper[4922]: E0126 14:36:51.093950 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:36:58 crc kubenswrapper[4922]: I0126 14:36:58.894858 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:58 crc kubenswrapper[4922]: I0126 14:36:58.967203 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4t97"] Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.041864 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g4t97" podUID="42884ce0-499b-400a-a49b-83dc05a83491" containerName="registry-server" containerID="cri-o://62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118" gracePeriod=2 Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.517270 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.579159 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-catalog-content\") pod \"42884ce0-499b-400a-a49b-83dc05a83491\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.579346 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvdrb\" (UniqueName: \"kubernetes.io/projected/42884ce0-499b-400a-a49b-83dc05a83491-kube-api-access-xvdrb\") pod \"42884ce0-499b-400a-a49b-83dc05a83491\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.579390 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-utilities\") pod \"42884ce0-499b-400a-a49b-83dc05a83491\" (UID: \"42884ce0-499b-400a-a49b-83dc05a83491\") " Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.582232 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-utilities" (OuterVolumeSpecName: "utilities") pod "42884ce0-499b-400a-a49b-83dc05a83491" (UID: "42884ce0-499b-400a-a49b-83dc05a83491"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.586195 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42884ce0-499b-400a-a49b-83dc05a83491-kube-api-access-xvdrb" (OuterVolumeSpecName: "kube-api-access-xvdrb") pod "42884ce0-499b-400a-a49b-83dc05a83491" (UID: "42884ce0-499b-400a-a49b-83dc05a83491"). InnerVolumeSpecName "kube-api-access-xvdrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.622859 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42884ce0-499b-400a-a49b-83dc05a83491" (UID: "42884ce0-499b-400a-a49b-83dc05a83491"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.682257 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvdrb\" (UniqueName: \"kubernetes.io/projected/42884ce0-499b-400a-a49b-83dc05a83491-kube-api-access-xvdrb\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.682321 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:36:59 crc kubenswrapper[4922]: I0126 14:36:59.682345 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42884ce0-499b-400a-a49b-83dc05a83491-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.058697 4922 generic.go:334] "Generic (PLEG): container finished" podID="42884ce0-499b-400a-a49b-83dc05a83491" containerID="62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118" exitCode=0 Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.058743 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4t97" event={"ID":"42884ce0-499b-400a-a49b-83dc05a83491","Type":"ContainerDied","Data":"62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118"} Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.058771 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4t97" event={"ID":"42884ce0-499b-400a-a49b-83dc05a83491","Type":"ContainerDied","Data":"339af8d5a8383aa962f41a753ab70c7b0650d3aa481fcabaac9ef5c24e9e0956"} Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.058793 4922 scope.go:117] "RemoveContainer" containerID="62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.058812 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4t97" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.093017 4922 scope.go:117] "RemoveContainer" containerID="2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.133168 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4t97"] Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.145448 4922 scope.go:117] "RemoveContainer" containerID="b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.146106 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g4t97"] Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.186208 4922 scope.go:117] "RemoveContainer" containerID="62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118" Jan 26 14:37:00 crc kubenswrapper[4922]: E0126 14:37:00.186742 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118\": container with ID starting with 62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118 not found: ID does not exist" containerID="62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.186771 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118"} err="failed to get container status \"62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118\": rpc error: code = NotFound desc = could not find container \"62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118\": container with ID starting with 62f4e27ae5b6fc4a554deb19b3744c8eb82db394d33fc29ee143e632c3056118 not found: ID does not exist" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.186789 4922 scope.go:117] "RemoveContainer" containerID="2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3" Jan 26 14:37:00 crc kubenswrapper[4922]: E0126 14:37:00.187241 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3\": container with ID starting with 2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3 not found: ID does not exist" containerID="2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.187261 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3"} err="failed to get container status \"2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3\": rpc error: code = NotFound desc = could not find container \"2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3\": container with ID starting with 2332d15ee05b261dfad61725c0198842284186566ec0251cf543ef1e81a173f3 not found: ID does not exist" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.187272 4922 scope.go:117] "RemoveContainer" containerID="b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666" Jan 26 14:37:00 crc kubenswrapper[4922]: E0126 14:37:00.187678 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666\": container with ID starting with b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666 not found: ID does not exist" containerID="b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666" Jan 26 14:37:00 crc kubenswrapper[4922]: I0126 14:37:00.187695 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666"} err="failed to get container status \"b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666\": rpc error: code = NotFound desc = could not find container \"b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666\": container with ID starting with b3edd5f9786dd97c22a661cfdee82ef8938542201463088f4fe692032308a666 not found: ID does not exist" Jan 26 14:37:01 crc kubenswrapper[4922]: I0126 14:37:01.110808 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42884ce0-499b-400a-a49b-83dc05a83491" path="/var/lib/kubelet/pods/42884ce0-499b-400a-a49b-83dc05a83491/volumes" Jan 26 14:37:06 crc kubenswrapper[4922]: I0126 14:37:06.092849 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:37:06 crc kubenswrapper[4922]: E0126 14:37:06.094019 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:37:20 crc kubenswrapper[4922]: I0126 14:37:20.092888 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:37:20 crc kubenswrapper[4922]: E0126 14:37:20.093897 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:37:35 crc kubenswrapper[4922]: I0126 14:37:35.093535 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:37:35 crc kubenswrapper[4922]: E0126 14:37:35.094736 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:37:41 crc kubenswrapper[4922]: I0126 14:37:41.053879 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6410-account-create-update-5f45b"] Jan 26 14:37:41 crc kubenswrapper[4922]: I0126 14:37:41.065900 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rcb2t"] Jan 26 
14:37:41 crc kubenswrapper[4922]: I0126 14:37:41.076852 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ba35-account-create-update-p8kwd"] Jan 26 14:37:41 crc kubenswrapper[4922]: I0126 14:37:41.089719 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ba35-account-create-update-p8kwd"] Jan 26 14:37:41 crc kubenswrapper[4922]: I0126 14:37:41.111499 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8907e8-ff60-47a7-a7da-cce27fd8ede1" path="/var/lib/kubelet/pods/de8907e8-ff60-47a7-a7da-cce27fd8ede1/volumes" Jan 26 14:37:41 crc kubenswrapper[4922]: I0126 14:37:41.112606 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rcb2t"] Jan 26 14:37:41 crc kubenswrapper[4922]: I0126 14:37:41.112653 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6410-account-create-update-5f45b"] Jan 26 14:37:42 crc kubenswrapper[4922]: I0126 14:37:42.035240 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-s566l"] Jan 26 14:37:42 crc kubenswrapper[4922]: I0126 14:37:42.059995 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-s566l"] Jan 26 14:37:43 crc kubenswrapper[4922]: I0126 14:37:43.048591 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-02c3-account-create-update-7ss8l"] Jan 26 14:37:43 crc kubenswrapper[4922]: I0126 14:37:43.066600 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-mlnbh"] Jan 26 14:37:43 crc kubenswrapper[4922]: I0126 14:37:43.080861 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-02c3-account-create-update-7ss8l"] Jan 26 14:37:43 crc kubenswrapper[4922]: I0126 14:37:43.091311 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-mlnbh"] Jan 26 14:37:43 crc kubenswrapper[4922]: I0126 14:37:43.111789 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d62f1e-149e-4aa1-b3d3-54cdcb1a2275" path="/var/lib/kubelet/pods/02d62f1e-149e-4aa1-b3d3-54cdcb1a2275/volumes" Jan 26 14:37:43 crc kubenswrapper[4922]: I0126 14:37:43.113648 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22235e5e-84ab-4632-98fc-dae804d6e4a4" path="/var/lib/kubelet/pods/22235e5e-84ab-4632-98fc-dae804d6e4a4/volumes" Jan 26 14:37:43 crc kubenswrapper[4922]: I0126 14:37:43.114821 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="560e44e8-1468-48d7-90b7-d205bdb05f9d" path="/var/lib/kubelet/pods/560e44e8-1468-48d7-90b7-d205bdb05f9d/volumes" Jan 26 14:37:43 crc kubenswrapper[4922]: I0126 14:37:43.116025 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b76d715-9901-4789-971e-8ba3bd1be5a9" path="/var/lib/kubelet/pods/7b76d715-9901-4789-971e-8ba3bd1be5a9/volumes" Jan 26 14:37:43 crc kubenswrapper[4922]: I0126 14:37:43.119027 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48da751-d5f8-4ef5-b2a0-33864b35ba6c" path="/var/lib/kubelet/pods/d48da751-d5f8-4ef5-b2a0-33864b35ba6c/volumes" Jan 26 14:37:46 crc kubenswrapper[4922]: I0126 14:37:46.093565 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:37:46 crc kubenswrapper[4922]: E0126 14:37:46.094616 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:37:55 crc kubenswrapper[4922]: I0126 14:37:55.071250 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hh472"] Jan 26 14:37:55 crc kubenswrapper[4922]: I0126 14:37:55.085823 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hh472"] Jan 26 14:37:55 crc kubenswrapper[4922]: I0126 14:37:55.113373 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0671d2d0-0598-41da-bfaa-a46b7b3a0bf2" path="/var/lib/kubelet/pods/0671d2d0-0598-41da-bfaa-a46b7b3a0bf2/volumes" Jan 26 14:38:01 crc kubenswrapper[4922]: I0126 14:38:01.093404 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:38:01 crc kubenswrapper[4922]: E0126 14:38:01.094518 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.037632 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-hhqtm"] Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.052246 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d418-account-create-update-v6qdg"] Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.068141 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8nvcl"] Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.077248 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zzv6s"] Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.085263 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-hhqtm"] Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.094123 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8nvcl"] Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.103107 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d418-account-create-update-v6qdg"] Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.113137 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zzv6s"] Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.124947 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1b47-account-create-update-tb2wz"] Jan 26 14:38:10 crc kubenswrapper[4922]: I0126 14:38:10.136632 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1b47-account-create-update-tb2wz"] Jan 26 14:38:11 crc kubenswrapper[4922]: I0126 14:38:11.107950 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1526bd7f-501e-4add-b2c3-e1f6f803d3cf" path="/var/lib/kubelet/pods/1526bd7f-501e-4add-b2c3-e1f6f803d3cf/volumes" Jan 26 14:38:11 crc kubenswrapper[4922]: I0126 14:38:11.109321 4922 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c573fb3-9fba-47ec-8951-b069561ed90e" path="/var/lib/kubelet/pods/4c573fb3-9fba-47ec-8951-b069561ed90e/volumes" Jan 26 14:38:11 crc kubenswrapper[4922]: I0126 14:38:11.110569 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5acb458f-2080-4c36-86cd-d0e8004b9f9d" path="/var/lib/kubelet/pods/5acb458f-2080-4c36-86cd-d0e8004b9f9d/volumes" Jan 26 14:38:11 crc kubenswrapper[4922]: I0126 14:38:11.111967 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa199068-fd5f-415d-82bd-32fd3b23a926" path="/var/lib/kubelet/pods/aa199068-fd5f-415d-82bd-32fd3b23a926/volumes" Jan 26 14:38:11 crc kubenswrapper[4922]: I0126 14:38:11.114605 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5539a2-9e43-4f3b-8dbf-14d091e7b37d" path="/var/lib/kubelet/pods/ba5539a2-9e43-4f3b-8dbf-14d091e7b37d/volumes" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.049871 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0ea3-account-create-update-6vlln"] Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.060997 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-643c-account-create-update-v5bs8"] Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.073314 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-x7466"] Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.083893 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0ea3-account-create-update-6vlln"] Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.093008 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:38:14 crc kubenswrapper[4922]: E0126 14:38:14.093619 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.095121 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-643c-account-create-update-v5bs8"] Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.103344 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-x7466"] Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.263488 4922 scope.go:117] "RemoveContainer" containerID="b39ee1e538e33aed07488b710fbb6c6b9358a3025f2e53a35c623842d62e0b25" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.290406 4922 scope.go:117] "RemoveContainer" containerID="306f6e7a38c62f6fc6ed0b9f0d6b1a1e49320d852ec0070152f4991feee54f36" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.348957 4922 scope.go:117] "RemoveContainer" containerID="216636d782b5a086696b0ca146e89125c4544c9143f66567db02cc32f0b7950e" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.388029 4922 scope.go:117] "RemoveContainer" containerID="9cf8cdb005164cbd233fd5bd637e836d1553871dd6e551dee01b7451ae4955dc" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.412357 4922 scope.go:117] "RemoveContainer" containerID="2ebd0308850195c47ef3e9b1aa49bc8b04a9b447707a5c28538c6dd1dc288226" Jan 26 14:38:14 crc 
kubenswrapper[4922]: I0126 14:38:14.452743 4922 scope.go:117] "RemoveContainer" containerID="a9f49dc17f42b4766399126cc7a3f74b9176b4189aa07abda9a8385c7293685e" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.498404 4922 scope.go:117] "RemoveContainer" containerID="27c8ed24f5147f682f0bc61427c34ae0325afdd3287d91bb80d9d840315cd574" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.539716 4922 scope.go:117] "RemoveContainer" containerID="bbe497081ab22b87d3a07a74cf62dc7806bcda9ef4bcdfac0c9de2046958fc6a" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.562031 4922 scope.go:117] "RemoveContainer" containerID="9b50ed01acbfecf801c8b71ee3866a707395362dd4aabc7d480babb6bec82a62" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.595135 4922 scope.go:117] "RemoveContainer" containerID="b88d0894b35842223f2f2f9e120d9d8d9165a952d40e46caf68d4112267be969" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.626990 4922 scope.go:117] "RemoveContainer" containerID="2f909b6a53ceeb784cc00754f15f7b7c0532bb6deb06311a49088e9d49f9e18a" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.652793 4922 scope.go:117] "RemoveContainer" containerID="aa21a52dd279c7ffddd6539a48452941eaf397e8108327321383a9116c469e8f" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.672391 4922 scope.go:117] "RemoveContainer" containerID="2eaccac50b73a3d2c5ffce13fe7b7a4d6f075b40b091362a3b3a9c98c8c83bbc" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.696975 4922 scope.go:117] "RemoveContainer" containerID="12cc4b04e483e50921a5f7033ba49c4751f5bc1540ee817537e89456d5a5037e" Jan 26 14:38:14 crc kubenswrapper[4922]: I0126 14:38:14.718971 4922 scope.go:117] "RemoveContainer" containerID="8cebcb680ace3e173974238d45f9aab1c4c268999f4b8bc267ed9d1947c553ab" Jan 26 14:38:15 crc kubenswrapper[4922]: I0126 14:38:15.107975 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21553358-b59a-4191-8376-e66491f5eadf" path="/var/lib/kubelet/pods/21553358-b59a-4191-8376-e66491f5eadf/volumes" Jan 26 14:38:15 crc kubenswrapper[4922]: I0126 14:38:15.109407 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19ecea1-4e7a-4bd9-9a7f-dca95afbe705" path="/var/lib/kubelet/pods/d19ecea1-4e7a-4bd9-9a7f-dca95afbe705/volumes" Jan 26 14:38:15 crc kubenswrapper[4922]: I0126 14:38:15.111333 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b6e159-08af-466b-ac1f-0faab319b9f5" path="/var/lib/kubelet/pods/e7b6e159-08af-466b-ac1f-0faab319b9f5/volumes" Jan 26 14:38:20 crc kubenswrapper[4922]: I0126 14:38:20.004853 4922 generic.go:334] "Generic (PLEG): container finished" podID="c6728b4b-8be0-4841-bbd4-0832817d537e" containerID="f2bfbb6c8969c709a22a02e1913f65e1e304040718326a5bdea0df237753d919" exitCode=0 Jan 26 14:38:20 crc kubenswrapper[4922]: I0126 14:38:20.004959 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" event={"ID":"c6728b4b-8be0-4841-bbd4-0832817d537e","Type":"ContainerDied","Data":"f2bfbb6c8969c709a22a02e1913f65e1e304040718326a5bdea0df237753d919"} Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.454698 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.561303 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86fl2\" (UniqueName: \"kubernetes.io/projected/c6728b4b-8be0-4841-bbd4-0832817d537e-kube-api-access-86fl2\") pod \"c6728b4b-8be0-4841-bbd4-0832817d537e\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.561570 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-inventory\") pod \"c6728b4b-8be0-4841-bbd4-0832817d537e\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.561623 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-bootstrap-combined-ca-bundle\") pod \"c6728b4b-8be0-4841-bbd4-0832817d537e\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.561751 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-ssh-key-openstack-edpm-ipam\") pod \"c6728b4b-8be0-4841-bbd4-0832817d537e\" (UID: \"c6728b4b-8be0-4841-bbd4-0832817d537e\") " Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.566734 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c6728b4b-8be0-4841-bbd4-0832817d537e" (UID: "c6728b4b-8be0-4841-bbd4-0832817d537e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.573791 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6728b4b-8be0-4841-bbd4-0832817d537e-kube-api-access-86fl2" (OuterVolumeSpecName: "kube-api-access-86fl2") pod "c6728b4b-8be0-4841-bbd4-0832817d537e" (UID: "c6728b4b-8be0-4841-bbd4-0832817d537e"). InnerVolumeSpecName "kube-api-access-86fl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.595597 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-inventory" (OuterVolumeSpecName: "inventory") pod "c6728b4b-8be0-4841-bbd4-0832817d537e" (UID: "c6728b4b-8be0-4841-bbd4-0832817d537e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.613509 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c6728b4b-8be0-4841-bbd4-0832817d537e" (UID: "c6728b4b-8be0-4841-bbd4-0832817d537e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.670745 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.670799 4922 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.670821 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6728b4b-8be0-4841-bbd4-0832817d537e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:38:21 crc kubenswrapper[4922]: I0126 14:38:21.670842 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86fl2\" (UniqueName: \"kubernetes.io/projected/c6728b4b-8be0-4841-bbd4-0832817d537e-kube-api-access-86fl2\") on node \"crc\" DevicePath \"\"" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.033576 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" event={"ID":"c6728b4b-8be0-4841-bbd4-0832817d537e","Type":"ContainerDied","Data":"7c1874f5acc153eff0533c980a2e08e21bb367c9f15309557b5f597e53428428"} Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.033616 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c1874f5acc153eff0533c980a2e08e21bb367c9f15309557b5f597e53428428" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.033642 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.125889 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99"] Jan 26 14:38:22 crc kubenswrapper[4922]: E0126 14:38:22.126332 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42884ce0-499b-400a-a49b-83dc05a83491" containerName="extract-utilities" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.126350 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="42884ce0-499b-400a-a49b-83dc05a83491" containerName="extract-utilities" Jan 26 14:38:22 crc kubenswrapper[4922]: E0126 14:38:22.126370 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42884ce0-499b-400a-a49b-83dc05a83491" containerName="registry-server" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.126377 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="42884ce0-499b-400a-a49b-83dc05a83491" containerName="registry-server" Jan 26 14:38:22 crc kubenswrapper[4922]: E0126 14:38:22.126389 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6728b4b-8be0-4841-bbd4-0832817d537e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.126396 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6728b4b-8be0-4841-bbd4-0832817d537e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 14:38:22 crc kubenswrapper[4922]: E0126 14:38:22.126414 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42884ce0-499b-400a-a49b-83dc05a83491" containerName="extract-content" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.126421 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="42884ce0-499b-400a-a49b-83dc05a83491" containerName="extract-content" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.126610 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="42884ce0-499b-400a-a49b-83dc05a83491" containerName="registry-server" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.126633 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6728b4b-8be0-4841-bbd4-0832817d537e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.127341 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.135659 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.135696 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.135697 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.135828 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.148140 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99"] Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.286944 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkgc\" (UniqueName: \"kubernetes.io/projected/e8749ec8-770d-498f-9ace-ad44e3385a36-kube-api-access-rdkgc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gcv99\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.287161 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gcv99\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.287229 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gcv99\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.389540 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gcv99\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.389617 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gcv99\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.389727 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkgc\" (UniqueName: 
\"kubernetes.io/projected/e8749ec8-770d-498f-9ace-ad44e3385a36-kube-api-access-rdkgc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gcv99\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.396967 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gcv99\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.398578 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gcv99\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.411867 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdkgc\" (UniqueName: \"kubernetes.io/projected/e8749ec8-770d-498f-9ace-ad44e3385a36-kube-api-access-rdkgc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gcv99\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:22 crc kubenswrapper[4922]: I0126 14:38:22.449575 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" Jan 26 14:38:23 crc kubenswrapper[4922]: I0126 14:38:23.022962 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99"] Jan 26 14:38:23 crc kubenswrapper[4922]: I0126 14:38:23.045500 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" event={"ID":"e8749ec8-770d-498f-9ace-ad44e3385a36","Type":"ContainerStarted","Data":"59a76d3c8b00b56ba0b29b868bef24b9cfb1f11e82f5ef63d6973b12c7d7bd63"} Jan 26 14:38:25 crc kubenswrapper[4922]: I0126 14:38:25.069876 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" event={"ID":"e8749ec8-770d-498f-9ace-ad44e3385a36","Type":"ContainerStarted","Data":"6f6391d56023cb362843a48a889917496f828911353aef8e7a27d093f31b0b3a"} Jan 26 14:38:25 crc kubenswrapper[4922]: I0126 14:38:25.098460 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" podStartSLOduration=2.286759043 podStartE2EDuration="3.098444143s" podCreationTimestamp="2026-01-26 14:38:22 +0000 UTC" firstStartedPulling="2026-01-26 14:38:23.02898685 +0000 UTC m=+1720.231249662" lastFinishedPulling="2026-01-26 14:38:23.84067199 +0000 UTC m=+1721.042934762" observedRunningTime="2026-01-26 14:38:25.094352232 +0000 UTC m=+1722.296615004" watchObservedRunningTime="2026-01-26 14:38:25.098444143 +0000 UTC m=+1722.300706915" Jan 26 14:38:27 crc kubenswrapper[4922]: I0126 14:38:27.092777 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:38:27 crc 
kubenswrapper[4922]: E0126 14:38:27.093808 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:38:36 crc kubenswrapper[4922]: I0126 14:38:36.040985 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-6mb2d"] Jan 26 14:38:36 crc kubenswrapper[4922]: I0126 14:38:36.050620 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-l2gr2"] Jan 26 14:38:36 crc kubenswrapper[4922]: I0126 14:38:36.058961 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-6mb2d"] Jan 26 14:38:36 crc kubenswrapper[4922]: I0126 14:38:36.066730 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-l2gr2"] Jan 26 14:38:37 crc kubenswrapper[4922]: I0126 14:38:37.105598 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5060ea35-5cb6-4f74-8f86-ec622a9c83d4" path="/var/lib/kubelet/pods/5060ea35-5cb6-4f74-8f86-ec622a9c83d4/volumes" Jan 26 14:38:37 crc kubenswrapper[4922]: I0126 14:38:37.107802 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb180569-7ff8-4908-8bdd-66b681f030df" path="/var/lib/kubelet/pods/eb180569-7ff8-4908-8bdd-66b681f030df/volumes" Jan 26 14:38:38 crc kubenswrapper[4922]: I0126 14:38:38.092812 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:38:38 crc kubenswrapper[4922]: E0126 14:38:38.093319 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:38:51 crc kubenswrapper[4922]: I0126 14:38:51.092442 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:38:51 crc kubenswrapper[4922]: E0126 14:38:51.093239 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:39:05 crc kubenswrapper[4922]: I0126 14:39:05.093291 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:39:05 crc kubenswrapper[4922]: E0126 14:39:05.094123 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:39:15 crc kubenswrapper[4922]: I0126 14:39:15.042935 4922 scope.go:117] "RemoveContainer" containerID="5b1f6bc4e5fbf6288fefd5266a9f1b259d9a46b28a4e3dcc8214f6ecadcecae4" Jan 26 14:39:15 crc kubenswrapper[4922]: I0126 14:39:15.070973 4922 scope.go:117] "RemoveContainer" containerID="3f4700bbe7f36afeab23ccec4d54884125080ed72ca9814046aae91364cd7396" Jan 26 14:39:15 crc kubenswrapper[4922]: I0126 14:39:15.169501 4922 scope.go:117] "RemoveContainer" containerID="2941f915fa06b7f4a3aff91d956d8dcd088c1692803c8a0fa9f65f5dccbde398" Jan 26 14:39:15 crc kubenswrapper[4922]: I0126 14:39:15.213907 4922 scope.go:117] "RemoveContainer" containerID="f723fec3ba6589a0da3c13a86dcf560e9ee07683222299d3f30523d96eff043f" Jan 26 14:39:15 crc kubenswrapper[4922]: I0126 14:39:15.259294 4922 scope.go:117] "RemoveContainer" containerID="2185c1ed9b133570c591250d5d2dc2793289d160b542a91be580eb0508dc5e50" Jan 26 14:39:17 crc kubenswrapper[4922]: I0126 14:39:17.094004 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:39:17 crc kubenswrapper[4922]: E0126 14:39:17.094731 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:39:32 crc kubenswrapper[4922]: I0126 14:39:32.064526 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xdqj7"] Jan 26 14:39:32 crc kubenswrapper[4922]: I0126 14:39:32.079264 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kdsxs"] Jan 26 14:39:32 crc kubenswrapper[4922]: I0126 14:39:32.092201 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xdqj7"] Jan 26 14:39:32 crc kubenswrapper[4922]: I0126 14:39:32.092684 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:39:32 crc kubenswrapper[4922]: E0126 14:39:32.093366 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:39:32 crc kubenswrapper[4922]: I0126 14:39:32.101981 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kdsxs"] Jan 26 14:39:33 crc kubenswrapper[4922]: I0126 14:39:33.110198 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="079ac494-9665-4c61-9ec5-47628d00d8bc" path="/var/lib/kubelet/pods/079ac494-9665-4c61-9ec5-47628d00d8bc/volumes" Jan 26 14:39:33 crc kubenswrapper[4922]: I0126 14:39:33.111349 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9" path="/var/lib/kubelet/pods/55cac8fc-fbd5-405b-9ca0-48dbcb7b3eb9/volumes" Jan 26 14:39:37 crc kubenswrapper[4922]: I0126 14:39:37.048410 
4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-nzdsh"]
Jan 26 14:39:37 crc kubenswrapper[4922]: I0126 14:39:37.058418 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-nzdsh"]
Jan 26 14:39:37 crc kubenswrapper[4922]: I0126 14:39:37.108606 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64dc8567-a56e-4cf4-8155-5b06c405f7ba" path="/var/lib/kubelet/pods/64dc8567-a56e-4cf4-8155-5b06c405f7ba/volumes"
Jan 26 14:39:44 crc kubenswrapper[4922]: I0126 14:39:44.093231 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70"
Jan 26 14:39:44 crc kubenswrapper[4922]: E0126 14:39:44.094287 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:39:51 crc kubenswrapper[4922]: I0126 14:39:51.064913 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rvk6w"]
Jan 26 14:39:51 crc kubenswrapper[4922]: I0126 14:39:51.074474 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rvk6w"]
Jan 26 14:39:51 crc kubenswrapper[4922]: I0126 14:39:51.105002 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91754680-73d8-4c72-a7bd-834959e192a1" path="/var/lib/kubelet/pods/91754680-73d8-4c72-a7bd-834959e192a1/volumes"
Jan 26 14:39:54 crc kubenswrapper[4922]: I0126 14:39:54.036936 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-44cnx"]
Jan 26 14:39:54 crc kubenswrapper[4922]: I0126 14:39:54.053421 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-44cnx"]
Jan 26 14:39:55 crc kubenswrapper[4922]: I0126 14:39:55.104028 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281c4d86-0cfa-4637-9106-2099e20add9a" path="/var/lib/kubelet/pods/281c4d86-0cfa-4637-9106-2099e20add9a/volumes"
Jan 26 14:39:58 crc kubenswrapper[4922]: I0126 14:39:58.092540 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70"
Jan 26 14:39:58 crc kubenswrapper[4922]: E0126 14:39:58.093476 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:40:00 crc kubenswrapper[4922]: I0126 14:40:00.042023 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-lt6mt"]
Jan 26 14:40:00 crc kubenswrapper[4922]: I0126 14:40:00.050265 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-lt6mt"]
Jan 26 14:40:01 crc kubenswrapper[4922]: I0126 14:40:01.111571 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c8b640-ac97-4a3e-8e4c-1781bd756396" path="/var/lib/kubelet/pods/99c8b640-ac97-4a3e-8e4c-1781bd756396/volumes"
Jan 26 14:40:09 crc kubenswrapper[4922]: I0126 14:40:09.092435 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70"
Jan 26 14:40:09 crc kubenswrapper[4922]: E0126 14:40:09.093121 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:40:15 crc kubenswrapper[4922]: I0126 14:40:15.402673 4922 scope.go:117] "RemoveContainer" containerID="16b745693bcab6d0f9232d87e8bb69872326118be3758e747c896eccdb6e0620"
Jan 26 14:40:15 crc kubenswrapper[4922]: I0126 14:40:15.465453 4922 scope.go:117] "RemoveContainer" containerID="392dfe958b3ed5aee9c1d4a1b60e37539a9c063ac7641d6f76d3547f769fedc1"
Jan 26 14:40:15 crc kubenswrapper[4922]: I0126 14:40:15.532708 4922 scope.go:117] "RemoveContainer" containerID="12325cb746235a5035dcbb0b5e62405626e7dd4ecbfbf27aa59d5e9353bf71a8"
Jan 26 14:40:15 crc kubenswrapper[4922]: I0126 14:40:15.571313 4922 scope.go:117] "RemoveContainer" containerID="a04ecedb0937b481c79df65e9f38d469fe0b6a913ee51d8b391e1e0fd2332851"
Jan 26 14:40:15 crc kubenswrapper[4922]: I0126 14:40:15.618889 4922 scope.go:117] "RemoveContainer" containerID="fd62fad13f01dcbbf639bc9ff71f574f92522f976a084b2bb925a733ab522f31"
Jan 26 14:40:15 crc kubenswrapper[4922]: I0126 14:40:15.661590 4922 scope.go:117] "RemoveContainer" containerID="61ac71e3b146021f3bb778b4b59391f795af61f8ae1751d232d0bac87fb85fdd"
Jan 26 14:40:24 crc kubenswrapper[4922]: I0126 14:40:24.092463 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70"
Jan 26 14:40:24 crc kubenswrapper[4922]: E0126 14:40:24.093788 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.066653 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9807-account-create-update-gjf78"]
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.112853 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cv9zm"]
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.122356 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rh9fp"]
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.129721 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1213-account-create-update-x4gt5"]
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.136910 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1cf0-account-create-update-mccq6"]
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.145867 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cv9zm"]
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.154685 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1213-account-create-update-x4gt5"]
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.162431 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9807-account-create-update-gjf78"]
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.169505 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1cf0-account-create-update-mccq6"]
Jan 26 14:40:30 crc kubenswrapper[4922]: I0126 14:40:30.175830 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rh9fp"]
Jan 26 14:40:31 crc kubenswrapper[4922]: I0126 14:40:31.040387 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-p7wdz"]
Jan 26 14:40:31 crc kubenswrapper[4922]: I0126 14:40:31.057397 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-p7wdz"]
Jan 26 14:40:31 crc kubenswrapper[4922]: I0126 14:40:31.109914 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="292489b8-e052-41ab-9648-a2113a58ca1b" path="/var/lib/kubelet/pods/292489b8-e052-41ab-9648-a2113a58ca1b/volumes"
Jan 26 14:40:31 crc kubenswrapper[4922]: I0126 14:40:31.111209 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a20bca-2b21-4f25-a853-04533951ab18" path="/var/lib/kubelet/pods/33a20bca-2b21-4f25-a853-04533951ab18/volumes"
Jan 26 14:40:31 crc kubenswrapper[4922]: I0126 14:40:31.112773 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee24504-84ec-4bd8-b29e-797fff4db145" path="/var/lib/kubelet/pods/4ee24504-84ec-4bd8-b29e-797fff4db145/volumes"
Jan 26 14:40:31 crc kubenswrapper[4922]: I0126 14:40:31.114410 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e52d2e-62ff-40ba-9e10-a970927a8e47" path="/var/lib/kubelet/pods/85e52d2e-62ff-40ba-9e10-a970927a8e47/volumes"
Jan 26 14:40:31 crc kubenswrapper[4922]: I0126 14:40:31.116733 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e9f925-c521-47fb-bc44-6243504b38ad" path="/var/lib/kubelet/pods/87e9f925-c521-47fb-bc44-6243504b38ad/volumes"
Jan 26 14:40:31 crc kubenswrapper[4922]: I0126 14:40:31.117977 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b04c51-33d8-4c83-9d5e-e11f2e4cf035" path="/var/lib/kubelet/pods/c9b04c51-33d8-4c83-9d5e-e11f2e4cf035/volumes"
Jan 26 14:40:38 crc kubenswrapper[4922]: I0126 14:40:38.092701 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70"
Jan 26 14:40:38 crc kubenswrapper[4922]: E0126 14:40:38.093719 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:40:53 crc kubenswrapper[4922]: I0126 14:40:53.127326 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70"
Jan 26 14:40:53 crc kubenswrapper[4922]: I0126 14:40:53.715802 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"fd36364959c3d9ab20a2aac447f7d4fb3bff085c6a9e6c63789643890d6297ba"}
Jan 26 14:41:03 crc kubenswrapper[4922]: I0126 14:41:03.835176 4922 generic.go:334] "Generic (PLEG): container finished" podID="e8749ec8-770d-498f-9ace-ad44e3385a36" containerID="6f6391d56023cb362843a48a889917496f828911353aef8e7a27d093f31b0b3a" exitCode=0
Jan 26 14:41:03 crc kubenswrapper[4922]: I0126 14:41:03.835305 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" event={"ID":"e8749ec8-770d-498f-9ace-ad44e3385a36","Type":"ContainerDied","Data":"6f6391d56023cb362843a48a889917496f828911353aef8e7a27d093f31b0b3a"}
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.290341 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.373373 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdkgc\" (UniqueName: \"kubernetes.io/projected/e8749ec8-770d-498f-9ace-ad44e3385a36-kube-api-access-rdkgc\") pod \"e8749ec8-770d-498f-9ace-ad44e3385a36\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") "
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.373513 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-ssh-key-openstack-edpm-ipam\") pod \"e8749ec8-770d-498f-9ace-ad44e3385a36\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") "
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.373568 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-inventory\") pod \"e8749ec8-770d-498f-9ace-ad44e3385a36\" (UID: \"e8749ec8-770d-498f-9ace-ad44e3385a36\") "
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.379519 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8749ec8-770d-498f-9ace-ad44e3385a36-kube-api-access-rdkgc" (OuterVolumeSpecName: "kube-api-access-rdkgc") pod "e8749ec8-770d-498f-9ace-ad44e3385a36" (UID: "e8749ec8-770d-498f-9ace-ad44e3385a36"). InnerVolumeSpecName "kube-api-access-rdkgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.404656 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-inventory" (OuterVolumeSpecName: "inventory") pod "e8749ec8-770d-498f-9ace-ad44e3385a36" (UID: "e8749ec8-770d-498f-9ace-ad44e3385a36"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.421256 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8749ec8-770d-498f-9ace-ad44e3385a36" (UID: "e8749ec8-770d-498f-9ace-ad44e3385a36"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.476472 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.476637 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8749ec8-770d-498f-9ace-ad44e3385a36-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.476751 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdkgc\" (UniqueName: \"kubernetes.io/projected/e8749ec8-770d-498f-9ace-ad44e3385a36-kube-api-access-rdkgc\") on node \"crc\" DevicePath \"\""
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.860007 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99" event={"ID":"e8749ec8-770d-498f-9ace-ad44e3385a36","Type":"ContainerDied","Data":"59a76d3c8b00b56ba0b29b868bef24b9cfb1f11e82f5ef63d6973b12c7d7bd63"}
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.860051 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59a76d3c8b00b56ba0b29b868bef24b9cfb1f11e82f5ef63d6973b12c7d7bd63"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.860115 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gcv99"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.981680 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"]
Jan 26 14:41:05 crc kubenswrapper[4922]: E0126 14:41:05.982206 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8749ec8-770d-498f-9ace-ad44e3385a36" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.982222 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8749ec8-770d-498f-9ace-ad44e3385a36" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.982488 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8749ec8-770d-498f-9ace-ad44e3385a36" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.984367 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.990679 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.991242 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.991259 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 14:41:05 crc kubenswrapper[4922]: I0126 14:41:05.993714 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.000987 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"]
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.039225 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j7942"]
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.047762 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j7942"]
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.088376 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.088612 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.088814 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jc9z\" (UniqueName: \"kubernetes.io/projected/967a08bd-ab17-442c-bc7f-0a37ecd86306-kube-api-access-5jc9z\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.191126 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jc9z\" (UniqueName: \"kubernetes.io/projected/967a08bd-ab17-442c-bc7f-0a37ecd86306-kube-api-access-5jc9z\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.191246 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.191275 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.195987 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.202866 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.214774 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jc9z\" (UniqueName: \"kubernetes.io/projected/967a08bd-ab17-442c-bc7f-0a37ecd86306-kube-api-access-5jc9z\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:06 crc kubenswrapper[4922]: I0126 14:41:06.355424 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:41:07 crc kubenswrapper[4922]: I0126 14:41:07.019297 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"]
Jan 26 14:41:07 crc kubenswrapper[4922]: I0126 14:41:07.103376 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226a1df4-9c6e-48d7-9c7f-b1d06f797a65" path="/var/lib/kubelet/pods/226a1df4-9c6e-48d7-9c7f-b1d06f797a65/volumes"
Jan 26 14:41:07 crc kubenswrapper[4922]: I0126 14:41:07.880287 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8" event={"ID":"967a08bd-ab17-442c-bc7f-0a37ecd86306","Type":"ContainerStarted","Data":"45c5fe674805e4a0c16de0892726207d93cf7c5e28f07890cd20dac97c6b1180"}
Jan 26 14:41:07 crc kubenswrapper[4922]: I0126 14:41:07.880803 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8" event={"ID":"967a08bd-ab17-442c-bc7f-0a37ecd86306","Type":"ContainerStarted","Data":"7480c3512ed06f777ca3323582d639836482999617de6278463297cf9384d887"}
Jan 26 14:41:07 crc kubenswrapper[4922]: I0126 14:41:07.910231 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8" podStartSLOduration=2.493563108 podStartE2EDuration="2.910204391s" podCreationTimestamp="2026-01-26 14:41:05 +0000 UTC" firstStartedPulling="2026-01-26 14:41:07.023677084 +0000 UTC m=+1884.225939856" lastFinishedPulling="2026-01-26 14:41:07.440318347 +0000 UTC m=+1884.642581139" observedRunningTime="2026-01-26 14:41:07.899340607 +0000 UTC m=+1885.101603369" watchObservedRunningTime="2026-01-26 14:41:07.910204391 +0000 UTC m=+1885.112467193"
Jan 26 14:41:15 crc kubenswrapper[4922]: I0126 14:41:15.841684 4922 scope.go:117] "RemoveContainer" containerID="53c76d61f12f30462f88f9dfaeefda3545994c152b148aaa6ddbd414f699b1ef"
Jan 26 14:41:15 crc kubenswrapper[4922]: I0126 14:41:15.883363 4922 scope.go:117] "RemoveContainer" containerID="30664a56011979da034c73622a8ad6629aeef01b41586ba4a5aaf1a5ccb6c6d0"
Jan 26 14:41:15 crc kubenswrapper[4922]: I0126 14:41:15.959641 4922 scope.go:117] "RemoveContainer" containerID="d222c4e696659287b31a26a1d7a2454dde709d9bdafd05e138d116619c5b2a98"
Jan 26 14:41:16 crc kubenswrapper[4922]: I0126 14:41:16.035958 4922 scope.go:117] "RemoveContainer" containerID="f16afd1356c41b87183dae0d377a616972f8e970c52d1000d825459d4d66caa5"
Jan 26 14:41:16 crc kubenswrapper[4922]: I0126 14:41:16.070030 4922 scope.go:117] "RemoveContainer" containerID="d41b3d786f11a53f74fef6004b464538dc31896a553f0972f66e4ee33a477bd4"
Jan 26 14:41:16 crc kubenswrapper[4922]: I0126 14:41:16.107604 4922 scope.go:117] "RemoveContainer" containerID="01345469de7665cf96a4b1f4e95a3e8b299448eb989fd623ccd8a3b98f138974"
Jan 26 14:41:16 crc kubenswrapper[4922]: I0126 14:41:16.144688 4922 scope.go:117] "RemoveContainer" containerID="d6c2714360348fd442014ee38105d8cedb6d774c460a0c7a07590082aa781d7b"
Jan 26 14:41:30 crc kubenswrapper[4922]: I0126 14:41:30.051929 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2g2kf"]
Jan 26 14:41:30 crc kubenswrapper[4922]: I0126 14:41:30.064330 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2g2kf"]
Jan 26 14:41:31 crc kubenswrapper[4922]: I0126 14:41:31.054279 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-td682"]
Jan 26 14:41:31 crc kubenswrapper[4922]: I0126 14:41:31.069986 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-td682"]
Jan 26 14:41:31 crc kubenswrapper[4922]: I0126 14:41:31.107569 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43199278-1695-4fee-a7e2-6ceb2cc304be" path="/var/lib/kubelet/pods/43199278-1695-4fee-a7e2-6ceb2cc304be/volumes"
Jan 26 14:41:31 crc kubenswrapper[4922]: I0126 14:41:31.108396 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="716f9795-18d5-4607-9f6c-09295bd2d003" path="/var/lib/kubelet/pods/716f9795-18d5-4607-9f6c-09295bd2d003/volumes"
Jan 26 14:42:16 crc kubenswrapper[4922]: I0126 14:42:16.051532 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-nt6h5"]
Jan 26 14:42:16 crc kubenswrapper[4922]: I0126 14:42:16.058774 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-nt6h5"]
Jan 26 14:42:16 crc kubenswrapper[4922]: I0126 14:42:16.258929 4922 scope.go:117] "RemoveContainer" containerID="21f13c9e7358973c66c52e47437d696be5ec6f44a0acb68e670b89bb9019df73"
Jan 26 14:42:16 crc kubenswrapper[4922]: I0126 14:42:16.299917 4922 scope.go:117] "RemoveContainer" containerID="ba7bc61144ff64344096775abcbc9dc6bfbd1b8aea09924921c6352fe04f725b"
Jan 26 14:42:17 crc kubenswrapper[4922]: I0126 14:42:17.106759 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bcbc70c-9471-43ef-9411-bda440b81b54" path="/var/lib/kubelet/pods/2bcbc70c-9471-43ef-9411-bda440b81b54/volumes"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.650136 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8r275"]
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.652744 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.674855 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8r275"]
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.751494 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-catalog-content\") pod \"redhat-operators-8r275\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") " pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.751569 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bv6h\" (UniqueName: \"kubernetes.io/projected/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-kube-api-access-7bv6h\") pod \"redhat-operators-8r275\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") " pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.751952 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-utilities\") pod \"redhat-operators-8r275\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") " pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.853762 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-utilities\") pod \"redhat-operators-8r275\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") " pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.853902 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-catalog-content\") pod \"redhat-operators-8r275\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") " pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.853941 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bv6h\" (UniqueName: \"kubernetes.io/projected/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-kube-api-access-7bv6h\") pod \"redhat-operators-8r275\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") " pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.854549 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-utilities\") pod \"redhat-operators-8r275\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") " pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.854802 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-catalog-content\") pod \"redhat-operators-8r275\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") " pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.876937 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bv6h\" (UniqueName: \"kubernetes.io/projected/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-kube-api-access-7bv6h\") pod \"redhat-operators-8r275\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") " pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:26 crc kubenswrapper[4922]: I0126 14:42:26.980644 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:27 crc kubenswrapper[4922]: I0126 14:42:27.284760 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8r275"]
Jan 26 14:42:27 crc kubenswrapper[4922]: I0126 14:42:27.747580 4922 generic.go:334] "Generic (PLEG): container finished" podID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerID="5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3" exitCode=0
Jan 26 14:42:27 crc kubenswrapper[4922]: I0126 14:42:27.747624 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r275" event={"ID":"0b7c6de6-5bf9-49a8-b080-783f2f0443b4","Type":"ContainerDied","Data":"5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3"}
Jan 26 14:42:27 crc kubenswrapper[4922]: I0126 14:42:27.748385 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r275" event={"ID":"0b7c6de6-5bf9-49a8-b080-783f2f0443b4","Type":"ContainerStarted","Data":"33bc150366213150c30595923e9311496070db4eba7709df18473f650fa38e6b"}
Jan 26 14:42:27 crc kubenswrapper[4922]: I0126 14:42:27.750916 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 14:42:34 crc kubenswrapper[4922]: I0126 14:42:34.834911 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r275" event={"ID":"0b7c6de6-5bf9-49a8-b080-783f2f0443b4","Type":"ContainerStarted","Data":"1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f"}
Jan 26 14:42:40 crc kubenswrapper[4922]: I0126 14:42:40.909919 4922 generic.go:334] "Generic (PLEG): container finished" podID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerID="1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f" exitCode=0
Jan 26 14:42:40 crc kubenswrapper[4922]: I0126 14:42:40.909962 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r275" event={"ID":"0b7c6de6-5bf9-49a8-b080-783f2f0443b4","Type":"ContainerDied","Data":"1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f"}
Jan 26 14:42:41 crc kubenswrapper[4922]: I0126 14:42:41.921954 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r275" event={"ID":"0b7c6de6-5bf9-49a8-b080-783f2f0443b4","Type":"ContainerStarted","Data":"11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96"}
Jan 26 14:42:41 crc kubenswrapper[4922]: I0126 14:42:41.946709 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8r275" podStartSLOduration=2.3132213200000002 podStartE2EDuration="15.946651938s" podCreationTimestamp="2026-01-26 14:42:26 +0000 UTC" firstStartedPulling="2026-01-26 14:42:27.750671478 +0000 UTC m=+1964.952934250" lastFinishedPulling="2026-01-26 14:42:41.384102106 +0000 UTC m=+1978.586364868" observedRunningTime="2026-01-26 14:42:41.942160867 +0000 UTC m=+1979.144423649" watchObservedRunningTime="2026-01-26 14:42:41.946651938 +0000 UTC m=+1979.148914780"
Jan 26 14:42:46 crc kubenswrapper[4922]: I0126 14:42:46.981105 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:46 crc kubenswrapper[4922]: I0126 14:42:46.981762 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:48 crc kubenswrapper[4922]: I0126 14:42:48.046275 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8r275" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerName="registry-server" probeResult="failure" output=<
Jan 26 14:42:48 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s
Jan 26 14:42:48 crc kubenswrapper[4922]: >
Jan 26 14:42:57 crc kubenswrapper[4922]: I0126 14:42:57.057768 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:57 crc kubenswrapper[4922]: I0126 14:42:57.118505 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:57 crc kubenswrapper[4922]: I0126 14:42:57.847870 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8r275"]
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.121809 4922 generic.go:334] "Generic (PLEG): container finished" podID="967a08bd-ab17-442c-bc7f-0a37ecd86306" containerID="45c5fe674805e4a0c16de0892726207d93cf7c5e28f07890cd20dac97c6b1180" exitCode=0
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.122419 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8r275" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerName="registry-server" containerID="cri-o://11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96" gracePeriod=2
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.123138 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8" event={"ID":"967a08bd-ab17-442c-bc7f-0a37ecd86306","Type":"ContainerDied","Data":"45c5fe674805e4a0c16de0892726207d93cf7c5e28f07890cd20dac97c6b1180"}
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.601801 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.732224 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bv6h\" (UniqueName: \"kubernetes.io/projected/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-kube-api-access-7bv6h\") pod \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") "
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.732364 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-catalog-content\") pod \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") "
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.732570 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-utilities\") pod \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\" (UID: \"0b7c6de6-5bf9-49a8-b080-783f2f0443b4\") "
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.733443 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-utilities" (OuterVolumeSpecName: "utilities") pod "0b7c6de6-5bf9-49a8-b080-783f2f0443b4" (UID: "0b7c6de6-5bf9-49a8-b080-783f2f0443b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.753278 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-kube-api-access-7bv6h" (OuterVolumeSpecName: "kube-api-access-7bv6h") pod "0b7c6de6-5bf9-49a8-b080-783f2f0443b4" (UID: "0b7c6de6-5bf9-49a8-b080-783f2f0443b4"). InnerVolumeSpecName "kube-api-access-7bv6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.836385 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.836431 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bv6h\" (UniqueName: \"kubernetes.io/projected/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-kube-api-access-7bv6h\") on node \"crc\" DevicePath \"\""
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.857874 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b7c6de6-5bf9-49a8-b080-783f2f0443b4" (UID: "0b7c6de6-5bf9-49a8-b080-783f2f0443b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:42:58 crc kubenswrapper[4922]: I0126 14:42:58.938490 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b7c6de6-5bf9-49a8-b080-783f2f0443b4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.133416 4922 generic.go:334] "Generic (PLEG): container finished" podID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerID="11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96" exitCode=0
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.133502 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8r275"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.133496 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r275" event={"ID":"0b7c6de6-5bf9-49a8-b080-783f2f0443b4","Type":"ContainerDied","Data":"11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96"}
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.133560 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r275" event={"ID":"0b7c6de6-5bf9-49a8-b080-783f2f0443b4","Type":"ContainerDied","Data":"33bc150366213150c30595923e9311496070db4eba7709df18473f650fa38e6b"}
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.133581 4922 scope.go:117] "RemoveContainer" containerID="11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.166157 4922 scope.go:117] "RemoveContainer" containerID="1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.168192 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8r275"]
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.180171 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8r275"]
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.198495 4922 scope.go:117] "RemoveContainer" containerID="5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.255533 4922 scope.go:117] "RemoveContainer" containerID="11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96"
Jan 26 14:42:59 crc kubenswrapper[4922]: E0126 14:42:59.256542 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96\": container with ID starting with 11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96 not found: ID does not exist" containerID="11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.256576 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96"} err="failed to get container status \"11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96\": rpc error: code = NotFound desc = could not find container \"11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96\": container with ID starting with 11efb12a6077fb500cddbe5389abefa2ca186d9b9374e9c083c6ead5ec334e96 not found: ID does not exist"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.256601 4922 scope.go:117] "RemoveContainer" containerID="1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f"
Jan 26 14:42:59 crc kubenswrapper[4922]: E0126 14:42:59.257263 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f\": container with ID starting with 1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f not found: ID does not exist" containerID="1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.257316 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f"} err="failed to get container status \"1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f\": rpc error: code = NotFound desc = could not find container \"1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f\": container with ID starting with 1b52752269e50cbbadd69b80f02a8c06ff1fbeae6916ff2ba2799f52e9a6956f not found: ID does not exist"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.257348 4922 scope.go:117] "RemoveContainer" containerID="5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3"
Jan 26 14:42:59 crc kubenswrapper[4922]: E0126 14:42:59.257818 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3\": container with ID starting with 5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3 not found: ID does not exist" containerID="5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.257847 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3"} err="failed to get container status \"5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3\": rpc error: code = NotFound desc = could not find container \"5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3\": container with ID starting with 5fc713f49eecb044aea985338f5493eed75980183c86e859bf2814e2315dc1e3 not found: ID does not exist"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.580061 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.753789 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jc9z\" (UniqueName: \"kubernetes.io/projected/967a08bd-ab17-442c-bc7f-0a37ecd86306-kube-api-access-5jc9z\") pod \"967a08bd-ab17-442c-bc7f-0a37ecd86306\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") "
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.754055 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-ssh-key-openstack-edpm-ipam\") pod \"967a08bd-ab17-442c-bc7f-0a37ecd86306\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") "
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.754360 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-inventory\") pod \"967a08bd-ab17-442c-bc7f-0a37ecd86306\" (UID: \"967a08bd-ab17-442c-bc7f-0a37ecd86306\") "
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.762533 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967a08bd-ab17-442c-bc7f-0a37ecd86306-kube-api-access-5jc9z" (OuterVolumeSpecName: "kube-api-access-5jc9z") pod "967a08bd-ab17-442c-bc7f-0a37ecd86306" (UID: "967a08bd-ab17-442c-bc7f-0a37ecd86306"). InnerVolumeSpecName "kube-api-access-5jc9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.786908 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-inventory" (OuterVolumeSpecName: "inventory") pod "967a08bd-ab17-442c-bc7f-0a37ecd86306" (UID: "967a08bd-ab17-442c-bc7f-0a37ecd86306"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.789754 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "967a08bd-ab17-442c-bc7f-0a37ecd86306" (UID: "967a08bd-ab17-442c-bc7f-0a37ecd86306"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.857166 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.857205 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/967a08bd-ab17-442c-bc7f-0a37ecd86306-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 14:42:59 crc kubenswrapper[4922]: I0126 14:42:59.857217 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jc9z\" (UniqueName: \"kubernetes.io/projected/967a08bd-ab17-442c-bc7f-0a37ecd86306-kube-api-access-5jc9z\") on node \"crc\" DevicePath \"\""
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.144971 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.144992 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8" event={"ID":"967a08bd-ab17-442c-bc7f-0a37ecd86306","Type":"ContainerDied","Data":"7480c3512ed06f777ca3323582d639836482999617de6278463297cf9384d887"}
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.145033 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7480c3512ed06f777ca3323582d639836482999617de6278463297cf9384d887"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.258710 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"]
Jan 26 14:43:00 crc kubenswrapper[4922]: E0126 14:43:00.259426 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerName="extract-content"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.259471 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerName="extract-content"
Jan 26 14:43:00 crc kubenswrapper[4922]: E0126 14:43:00.259514 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerName="extract-utilities"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.259527 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerName="extract-utilities"
Jan 26 14:43:00 crc kubenswrapper[4922]: E0126 14:43:00.259556 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerName="registry-server"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.259570 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerName="registry-server"
Jan 26 14:43:00 crc kubenswrapper[4922]: E0126 14:43:00.259597 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="967a08bd-ab17-442c-bc7f-0a37ecd86306" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.259610 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="967a08bd-ab17-442c-bc7f-0a37ecd86306" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.259969 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="967a08bd-ab17-442c-bc7f-0a37ecd86306" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.260029 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" containerName="registry-server"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.261353 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.263591 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.263657 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.264105 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.265857 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.278744 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"]
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.366793 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.366850 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmwh\" (UniqueName: \"kubernetes.io/projected/460930ff-ef82-4c8d-8f3b-36551f8fb401-kube-api-access-csmwh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.367127 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.468943 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.469010 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csmwh\" (UniqueName: \"kubernetes.io/projected/460930ff-ef82-4c8d-8f3b-36551f8fb401-kube-api-access-csmwh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.469184 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.473521 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.476056 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.489366 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csmwh\" (UniqueName: \"kubernetes.io/projected/460930ff-ef82-4c8d-8f3b-36551f8fb401-kube-api-access-csmwh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:00 crc kubenswrapper[4922]: I0126 14:43:00.579195 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:01 crc kubenswrapper[4922]: I0126 14:43:01.105718 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b7c6de6-5bf9-49a8-b080-783f2f0443b4" path="/var/lib/kubelet/pods/0b7c6de6-5bf9-49a8-b080-783f2f0443b4/volumes"
Jan 26 14:43:01 crc kubenswrapper[4922]: W0126 14:43:01.136774 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod460930ff_ef82_4c8d_8f3b_36551f8fb401.slice/crio-f2d5893de4f86058a6ff8fc8e4bd49dcef1721c8787af2d6373c0876478f9fa6 WatchSource:0}: Error finding container f2d5893de4f86058a6ff8fc8e4bd49dcef1721c8787af2d6373c0876478f9fa6: Status 404 returned error can't find the container with id f2d5893de4f86058a6ff8fc8e4bd49dcef1721c8787af2d6373c0876478f9fa6
Jan 26 14:43:01 crc kubenswrapper[4922]: I0126 14:43:01.136914 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"]
Jan 26 14:43:01 crc kubenswrapper[4922]: I0126 14:43:01.155111 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f" event={"ID":"460930ff-ef82-4c8d-8f3b-36551f8fb401","Type":"ContainerStarted","Data":"f2d5893de4f86058a6ff8fc8e4bd49dcef1721c8787af2d6373c0876478f9fa6"}
Jan 26 14:43:03 crc kubenswrapper[4922]: I0126 14:43:03.177036 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f" event={"ID":"460930ff-ef82-4c8d-8f3b-36551f8fb401","Type":"ContainerStarted","Data":"e1a5743b020258e9f033f9d61f9a300545248a7d25d295478a2e0fd061dfa88e"}
Jan 26 14:43:03 crc kubenswrapper[4922]: I0126 14:43:03.204594 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f" podStartSLOduration=2.527633147 podStartE2EDuration="3.204573678s" podCreationTimestamp="2026-01-26 14:43:00 +0000 UTC" firstStartedPulling="2026-01-26 14:43:01.140003642 +0000 UTC m=+1998.342266414" lastFinishedPulling="2026-01-26 14:43:01.816944173 +0000 UTC m=+1999.019206945" observedRunningTime="2026-01-26 14:43:03.195950965 +0000 UTC m=+2000.398213737" watchObservedRunningTime="2026-01-26 14:43:03.204573678 +0000 UTC m=+2000.406836470"
Jan 26 14:43:08 crc kubenswrapper[4922]: I0126 14:43:08.226587 4922 generic.go:334] "Generic (PLEG): container finished" podID="460930ff-ef82-4c8d-8f3b-36551f8fb401" containerID="e1a5743b020258e9f033f9d61f9a300545248a7d25d295478a2e0fd061dfa88e" exitCode=0
Jan 26 14:43:08 crc kubenswrapper[4922]: I0126 14:43:08.226738 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f" event={"ID":"460930ff-ef82-4c8d-8f3b-36551f8fb401","Type":"ContainerDied","Data":"e1a5743b020258e9f033f9d61f9a300545248a7d25d295478a2e0fd061dfa88e"}
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.635868 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.695564 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csmwh\" (UniqueName: \"kubernetes.io/projected/460930ff-ef82-4c8d-8f3b-36551f8fb401-kube-api-access-csmwh\") pod \"460930ff-ef82-4c8d-8f3b-36551f8fb401\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") "
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.695671 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-ssh-key-openstack-edpm-ipam\") pod \"460930ff-ef82-4c8d-8f3b-36551f8fb401\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") "
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.695698 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-inventory\") pod \"460930ff-ef82-4c8d-8f3b-36551f8fb401\" (UID: \"460930ff-ef82-4c8d-8f3b-36551f8fb401\") "
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.707250 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460930ff-ef82-4c8d-8f3b-36551f8fb401-kube-api-access-csmwh" (OuterVolumeSpecName: "kube-api-access-csmwh") pod "460930ff-ef82-4c8d-8f3b-36551f8fb401" (UID: "460930ff-ef82-4c8d-8f3b-36551f8fb401"). InnerVolumeSpecName "kube-api-access-csmwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.726987 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-inventory" (OuterVolumeSpecName: "inventory") pod "460930ff-ef82-4c8d-8f3b-36551f8fb401" (UID: "460930ff-ef82-4c8d-8f3b-36551f8fb401"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.728668 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "460930ff-ef82-4c8d-8f3b-36551f8fb401" (UID: "460930ff-ef82-4c8d-8f3b-36551f8fb401"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.797305 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csmwh\" (UniqueName: \"kubernetes.io/projected/460930ff-ef82-4c8d-8f3b-36551f8fb401-kube-api-access-csmwh\") on node \"crc\" DevicePath \"\""
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.797337 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 14:43:09 crc kubenswrapper[4922]: I0126 14:43:09.797347 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/460930ff-ef82-4c8d-8f3b-36551f8fb401-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.249338 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f" event={"ID":"460930ff-ef82-4c8d-8f3b-36551f8fb401","Type":"ContainerDied","Data":"f2d5893de4f86058a6ff8fc8e4bd49dcef1721c8787af2d6373c0876478f9fa6"}
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.249379 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d5893de4f86058a6ff8fc8e4bd49dcef1721c8787af2d6373c0876478f9fa6"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.249400 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.333321 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs"]
Jan 26 14:43:10 crc kubenswrapper[4922]: E0126 14:43:10.333787 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460930ff-ef82-4c8d-8f3b-36551f8fb401" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.333803 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="460930ff-ef82-4c8d-8f3b-36551f8fb401" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.333974 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="460930ff-ef82-4c8d-8f3b-36551f8fb401" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.334882 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.336905 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.338849 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.339027 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.339085 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.345203 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs"]
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.424125 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6l4\" (UniqueName: \"kubernetes.io/projected/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-kube-api-access-5s6l4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rl9qs\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.424307 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rl9qs\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.424522 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rl9qs\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.527788 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6l4\" (UniqueName: \"kubernetes.io/projected/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-kube-api-access-5s6l4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rl9qs\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.527946 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rl9qs\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs"
Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.528162 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-inventory\") pod
\"install-os-edpm-deployment-openstack-edpm-ipam-rl9qs\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.534502 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rl9qs\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.535527 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rl9qs\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.548279 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s6l4\" (UniqueName: \"kubernetes.io/projected/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-kube-api-access-5s6l4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rl9qs\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" Jan 26 14:43:10 crc kubenswrapper[4922]: I0126 14:43:10.657006 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" Jan 26 14:43:11 crc kubenswrapper[4922]: I0126 14:43:11.178275 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs"] Jan 26 14:43:11 crc kubenswrapper[4922]: I0126 14:43:11.257797 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" event={"ID":"ef2f11f3-ab6f-449f-9bf8-1306119e67ad","Type":"ContainerStarted","Data":"49921ef53882f45762c3c3f905d10e6a9bbce219b38e6a49707909581395692e"} Jan 26 14:43:11 crc kubenswrapper[4922]: I0126 14:43:11.308366 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:43:11 crc kubenswrapper[4922]: I0126 14:43:11.308474 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:43:12 crc kubenswrapper[4922]: I0126 14:43:12.268603 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" event={"ID":"ef2f11f3-ab6f-449f-9bf8-1306119e67ad","Type":"ContainerStarted","Data":"e65489e8febf2214a89ddc846c084d910a127694701ae42084c0733b6058c76c"} Jan 26 14:43:12 crc kubenswrapper[4922]: I0126 14:43:12.292523 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" podStartSLOduration=1.850640498 
podStartE2EDuration="2.292503635s" podCreationTimestamp="2026-01-26 14:43:10 +0000 UTC" firstStartedPulling="2026-01-26 14:43:11.183341008 +0000 UTC m=+2008.385603770" lastFinishedPulling="2026-01-26 14:43:11.625204135 +0000 UTC m=+2008.827466907" observedRunningTime="2026-01-26 14:43:12.281008326 +0000 UTC m=+2009.483271108" watchObservedRunningTime="2026-01-26 14:43:12.292503635 +0000 UTC m=+2009.494766407" Jan 26 14:43:16 crc kubenswrapper[4922]: I0126 14:43:16.483050 4922 scope.go:117] "RemoveContainer" containerID="f1e0655729c41ea7b98b471b04d82dc44542b04edfd647d7a49f4689f5ee5901" Jan 26 14:43:41 crc kubenswrapper[4922]: I0126 14:43:41.306596 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:43:41 crc kubenswrapper[4922]: I0126 14:43:41.307140 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:43:53 crc kubenswrapper[4922]: I0126 14:43:53.651563 4922 generic.go:334] "Generic (PLEG): container finished" podID="ef2f11f3-ab6f-449f-9bf8-1306119e67ad" containerID="e65489e8febf2214a89ddc846c084d910a127694701ae42084c0733b6058c76c" exitCode=0 Jan 26 14:43:53 crc kubenswrapper[4922]: I0126 14:43:53.651650 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" event={"ID":"ef2f11f3-ab6f-449f-9bf8-1306119e67ad","Type":"ContainerDied","Data":"e65489e8febf2214a89ddc846c084d910a127694701ae42084c0733b6058c76c"} Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.130224 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.244622 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s6l4\" (UniqueName: \"kubernetes.io/projected/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-kube-api-access-5s6l4\") pod \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.244912 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-inventory\") pod \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.245024 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-ssh-key-openstack-edpm-ipam\") pod \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\" (UID: \"ef2f11f3-ab6f-449f-9bf8-1306119e67ad\") " Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.251692 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-kube-api-access-5s6l4" (OuterVolumeSpecName: "kube-api-access-5s6l4") pod "ef2f11f3-ab6f-449f-9bf8-1306119e67ad" (UID: "ef2f11f3-ab6f-449f-9bf8-1306119e67ad"). InnerVolumeSpecName "kube-api-access-5s6l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.276012 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ef2f11f3-ab6f-449f-9bf8-1306119e67ad" (UID: "ef2f11f3-ab6f-449f-9bf8-1306119e67ad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.291034 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-inventory" (OuterVolumeSpecName: "inventory") pod "ef2f11f3-ab6f-449f-9bf8-1306119e67ad" (UID: "ef2f11f3-ab6f-449f-9bf8-1306119e67ad"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.348165 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.348379 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s6l4\" (UniqueName: \"kubernetes.io/projected/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-kube-api-access-5s6l4\") on node \"crc\" DevicePath \"\"" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.348408 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef2f11f3-ab6f-449f-9bf8-1306119e67ad-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.672903 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" event={"ID":"ef2f11f3-ab6f-449f-9bf8-1306119e67ad","Type":"ContainerDied","Data":"49921ef53882f45762c3c3f905d10e6a9bbce219b38e6a49707909581395692e"} Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.672943 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rl9qs" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.672957 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49921ef53882f45762c3c3f905d10e6a9bbce219b38e6a49707909581395692e" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.770649 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg"] Jan 26 14:43:55 crc kubenswrapper[4922]: E0126 14:43:55.771315 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2f11f3-ab6f-449f-9bf8-1306119e67ad" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.771336 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2f11f3-ab6f-449f-9bf8-1306119e67ad" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.771573 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef2f11f3-ab6f-449f-9bf8-1306119e67ad" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.772389 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.774498 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.774667 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.774957 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.775262 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.795221 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg"] Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.859464 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkznb\" (UniqueName: \"kubernetes.io/projected/cd6ac053-8747-40cb-87df-2ad523dafbf0-kube-api-access-tkznb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsptg\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.859672 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsptg\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.859700 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsptg\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.960936 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsptg\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.960995 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsptg\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.961033 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkznb\" (UniqueName: 
\"kubernetes.io/projected/cd6ac053-8747-40cb-87df-2ad523dafbf0-kube-api-access-tkznb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsptg\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.966094 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsptg\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.967458 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsptg\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:55 crc kubenswrapper[4922]: I0126 14:43:55.985805 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkznb\" (UniqueName: \"kubernetes.io/projected/cd6ac053-8747-40cb-87df-2ad523dafbf0-kube-api-access-tkznb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xsptg\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:56 crc kubenswrapper[4922]: I0126 14:43:56.089346 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:43:56 crc kubenswrapper[4922]: I0126 14:43:56.618236 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg"] Jan 26 14:43:56 crc kubenswrapper[4922]: I0126 14:43:56.682052 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" event={"ID":"cd6ac053-8747-40cb-87df-2ad523dafbf0","Type":"ContainerStarted","Data":"b58d18c4dbabfcf4c77136b5fd513b5dd991bdde2f78266ef19646f902800271"} Jan 26 14:43:57 crc kubenswrapper[4922]: I0126 14:43:57.692819 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" event={"ID":"cd6ac053-8747-40cb-87df-2ad523dafbf0","Type":"ContainerStarted","Data":"c02b209e362a21f37e0fded828b182038e3dd675216c034d7785c408a0c195aa"} Jan 26 14:43:57 crc kubenswrapper[4922]: I0126 14:43:57.720023 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" podStartSLOduration=2.27598686 podStartE2EDuration="2.720000984s" podCreationTimestamp="2026-01-26 14:43:55 +0000 UTC" firstStartedPulling="2026-01-26 14:43:56.624900195 +0000 UTC m=+2053.827162967" lastFinishedPulling="2026-01-26 14:43:57.068914309 +0000 UTC m=+2054.271177091" observedRunningTime="2026-01-26 14:43:57.707761055 +0000 UTC m=+2054.910023907" watchObservedRunningTime="2026-01-26 14:43:57.720000984 +0000 UTC m=+2054.922263766" Jan 26 14:44:11 crc kubenswrapper[4922]: I0126 14:44:11.306451 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:44:11 crc kubenswrapper[4922]: I0126 14:44:11.307033 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:44:11 crc kubenswrapper[4922]: I0126 14:44:11.307096 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:44:11 crc kubenswrapper[4922]: I0126 14:44:11.307882 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd36364959c3d9ab20a2aac447f7d4fb3bff085c6a9e6c63789643890d6297ba"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:44:11 crc kubenswrapper[4922]: I0126 14:44:11.307933 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://fd36364959c3d9ab20a2aac447f7d4fb3bff085c6a9e6c63789643890d6297ba" gracePeriod=600 Jan 26 14:44:11 crc kubenswrapper[4922]: I0126 14:44:11.841861 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="fd36364959c3d9ab20a2aac447f7d4fb3bff085c6a9e6c63789643890d6297ba" exitCode=0 Jan 26 14:44:11 crc kubenswrapper[4922]: I0126 14:44:11.841964 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"fd36364959c3d9ab20a2aac447f7d4fb3bff085c6a9e6c63789643890d6297ba"} Jan 26 14:44:11 crc kubenswrapper[4922]: I0126 14:44:11.842269 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"} Jan 26 14:44:11 crc kubenswrapper[4922]: I0126 14:44:11.842294 4922 scope.go:117] "RemoveContainer" containerID="1786208cd0f7bd9cb48d1fb6ac22d2b7ea1cec344af2afd06423f7acdb7c7c70" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.132504 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-92xcn"] Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.169450 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92xcn"] Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.169574 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.214541 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-catalog-content\") pod \"redhat-marketplace-92xcn\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.214880 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jqnb\" (UniqueName: \"kubernetes.io/projected/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-kube-api-access-4jqnb\") pod \"redhat-marketplace-92xcn\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.214938 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-utilities\") pod \"redhat-marketplace-92xcn\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.316978 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-catalog-content\") pod \"redhat-marketplace-92xcn\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.317495 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-catalog-content\") pod \"redhat-marketplace-92xcn\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.317741 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jqnb\" (UniqueName: \"kubernetes.io/projected/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-kube-api-access-4jqnb\") pod \"redhat-marketplace-92xcn\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.317799 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-utilities\") pod \"redhat-marketplace-92xcn\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.318118 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-utilities\") pod \"redhat-marketplace-92xcn\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.337249 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jqnb\" (UniqueName: \"kubernetes.io/projected/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-kube-api-access-4jqnb\") pod 
\"redhat-marketplace-92xcn\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.491828 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:39 crc kubenswrapper[4922]: I0126 14:44:39.943219 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92xcn"] Jan 26 14:44:40 crc kubenswrapper[4922]: I0126 14:44:40.193315 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92xcn" event={"ID":"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8","Type":"ContainerStarted","Data":"d9def0b3df085427f7c8e02173cf42793f0054b9f33db209ae4ce7ae22550100"} Jan 26 14:44:41 crc kubenswrapper[4922]: I0126 14:44:41.201639 4922 generic.go:334] "Generic (PLEG): container finished" podID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerID="6131608d6ed35e5ff5c8bcd6332015837a354723c098643667623c0dbd717cdc" exitCode=0 Jan 26 14:44:41 crc kubenswrapper[4922]: I0126 14:44:41.201690 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92xcn" event={"ID":"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8","Type":"ContainerDied","Data":"6131608d6ed35e5ff5c8bcd6332015837a354723c098643667623c0dbd717cdc"} Jan 26 14:44:42 crc kubenswrapper[4922]: I0126 14:44:42.230529 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92xcn" event={"ID":"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8","Type":"ContainerStarted","Data":"6eb82925e095dc359f90de3fa9e23d7866d15b88e829ef824387937378528eaf"} Jan 26 14:44:43 crc kubenswrapper[4922]: I0126 14:44:43.245816 4922 generic.go:334] "Generic (PLEG): container finished" podID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerID="6eb82925e095dc359f90de3fa9e23d7866d15b88e829ef824387937378528eaf" exitCode=0 Jan 26 14:44:43 crc kubenswrapper[4922]: I0126 14:44:43.245867 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92xcn" event={"ID":"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8","Type":"ContainerDied","Data":"6eb82925e095dc359f90de3fa9e23d7866d15b88e829ef824387937378528eaf"} Jan 26 14:44:44 crc kubenswrapper[4922]: I0126 14:44:44.255582 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92xcn" event={"ID":"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8","Type":"ContainerStarted","Data":"74dc5e40d83c9ec1ddbd794d9285bf3d1b59c315f510ded39164cfa2fb435e5a"} Jan 26 14:44:49 crc kubenswrapper[4922]: I0126 14:44:49.492011 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:49 crc kubenswrapper[4922]: I0126 14:44:49.492632 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:49 crc kubenswrapper[4922]: I0126 14:44:49.563792 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:49 crc kubenswrapper[4922]: I0126 14:44:49.588911 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-92xcn" podStartSLOduration=8.147400477 podStartE2EDuration="10.588892126s" podCreationTimestamp="2026-01-26 14:44:39 +0000 UTC" firstStartedPulling="2026-01-26 14:44:41.203562793 +0000 UTC 
m=+2098.405825575" lastFinishedPulling="2026-01-26 14:44:43.645054452 +0000 UTC m=+2100.847317224" observedRunningTime="2026-01-26 14:44:44.30668229 +0000 UTC m=+2101.508945072" watchObservedRunningTime="2026-01-26 14:44:49.588892126 +0000 UTC m=+2106.791154898" Jan 26 14:44:50 crc kubenswrapper[4922]: I0126 14:44:50.364264 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:50 crc kubenswrapper[4922]: I0126 14:44:50.423400 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92xcn"] Jan 26 14:44:52 crc kubenswrapper[4922]: I0126 14:44:52.324367 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-92xcn" podUID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerName="registry-server" containerID="cri-o://74dc5e40d83c9ec1ddbd794d9285bf3d1b59c315f510ded39164cfa2fb435e5a" gracePeriod=2 Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.336051 4922 generic.go:334] "Generic (PLEG): container finished" podID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerID="74dc5e40d83c9ec1ddbd794d9285bf3d1b59c315f510ded39164cfa2fb435e5a" exitCode=0 Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.336139 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92xcn" event={"ID":"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8","Type":"ContainerDied","Data":"74dc5e40d83c9ec1ddbd794d9285bf3d1b59c315f510ded39164cfa2fb435e5a"} Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.336163 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92xcn" event={"ID":"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8","Type":"ContainerDied","Data":"d9def0b3df085427f7c8e02173cf42793f0054b9f33db209ae4ce7ae22550100"} Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.336184 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9def0b3df085427f7c8e02173cf42793f0054b9f33db209ae4ce7ae22550100" Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.350724 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.405838 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-utilities\") pod \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.406778 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-utilities" (OuterVolumeSpecName: "utilities") pod "610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" (UID: "610e4a98-3ee5-436e-ae42-3a0e37d7b8b8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.406975 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jqnb\" (UniqueName: \"kubernetes.io/projected/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-kube-api-access-4jqnb\") pod \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.407923 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-catalog-content\") pod \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\" (UID: \"610e4a98-3ee5-436e-ae42-3a0e37d7b8b8\") " Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.408760 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.413326 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-kube-api-access-4jqnb" (OuterVolumeSpecName: "kube-api-access-4jqnb") pod "610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" (UID: "610e4a98-3ee5-436e-ae42-3a0e37d7b8b8"). InnerVolumeSpecName "kube-api-access-4jqnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.431419 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" (UID: "610e4a98-3ee5-436e-ae42-3a0e37d7b8b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.509838 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:44:53 crc kubenswrapper[4922]: I0126 14:44:53.509869 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jqnb\" (UniqueName: \"kubernetes.io/projected/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8-kube-api-access-4jqnb\") on node \"crc\" DevicePath \"\"" Jan 26 14:44:54 crc kubenswrapper[4922]: I0126 14:44:54.347466 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92xcn" Jan 26 14:44:54 crc kubenswrapper[4922]: I0126 14:44:54.394595 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92xcn"] Jan 26 14:44:54 crc kubenswrapper[4922]: I0126 14:44:54.403138 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-92xcn"] Jan 26 14:44:55 crc kubenswrapper[4922]: I0126 14:44:55.105535 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" path="/var/lib/kubelet/pods/610e4a98-3ee5-436e-ae42-3a0e37d7b8b8/volumes" Jan 26 14:44:56 crc kubenswrapper[4922]: I0126 14:44:56.369878 4922 generic.go:334] "Generic (PLEG): container finished" podID="cd6ac053-8747-40cb-87df-2ad523dafbf0" containerID="c02b209e362a21f37e0fded828b182038e3dd675216c034d7785c408a0c195aa" exitCode=0 Jan 26 14:44:56 crc kubenswrapper[4922]: I0126 14:44:56.369990 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" event={"ID":"cd6ac053-8747-40cb-87df-2ad523dafbf0","Type":"ContainerDied","Data":"c02b209e362a21f37e0fded828b182038e3dd675216c034d7785c408a0c195aa"} Jan 26 14:44:57 crc kubenswrapper[4922]: I0126 14:44:57.806690 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:44:57 crc kubenswrapper[4922]: I0126 14:44:57.898884 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-inventory\") pod \"cd6ac053-8747-40cb-87df-2ad523dafbf0\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " Jan 26 14:44:57 crc kubenswrapper[4922]: I0126 14:44:57.898979 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-ssh-key-openstack-edpm-ipam\") pod \"cd6ac053-8747-40cb-87df-2ad523dafbf0\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " Jan 26 14:44:57 crc kubenswrapper[4922]: I0126 14:44:57.899091 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkznb\" (UniqueName: \"kubernetes.io/projected/cd6ac053-8747-40cb-87df-2ad523dafbf0-kube-api-access-tkznb\") pod \"cd6ac053-8747-40cb-87df-2ad523dafbf0\" (UID: \"cd6ac053-8747-40cb-87df-2ad523dafbf0\") " Jan 26 14:44:57 crc kubenswrapper[4922]: I0126 14:44:57.904412 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd6ac053-8747-40cb-87df-2ad523dafbf0-kube-api-access-tkznb" (OuterVolumeSpecName: "kube-api-access-tkznb") pod "cd6ac053-8747-40cb-87df-2ad523dafbf0" (UID: "cd6ac053-8747-40cb-87df-2ad523dafbf0"). InnerVolumeSpecName "kube-api-access-tkznb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:44:57 crc kubenswrapper[4922]: I0126 14:44:57.927441 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-inventory" (OuterVolumeSpecName: "inventory") pod "cd6ac053-8747-40cb-87df-2ad523dafbf0" (UID: "cd6ac053-8747-40cb-87df-2ad523dafbf0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:44:57 crc kubenswrapper[4922]: I0126 14:44:57.934185 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cd6ac053-8747-40cb-87df-2ad523dafbf0" (UID: "cd6ac053-8747-40cb-87df-2ad523dafbf0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.001477 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkznb\" (UniqueName: \"kubernetes.io/projected/cd6ac053-8747-40cb-87df-2ad523dafbf0-kube-api-access-tkznb\") on node \"crc\" DevicePath \"\"" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.001508 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.001518 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd6ac053-8747-40cb-87df-2ad523dafbf0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.389566 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" event={"ID":"cd6ac053-8747-40cb-87df-2ad523dafbf0","Type":"ContainerDied","Data":"b58d18c4dbabfcf4c77136b5fd513b5dd991bdde2f78266ef19646f902800271"} Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.389611 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58d18c4dbabfcf4c77136b5fd513b5dd991bdde2f78266ef19646f902800271" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.389713 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xsptg" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.479323 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mdksr"] Jan 26 14:44:58 crc kubenswrapper[4922]: E0126 14:44:58.479704 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerName="registry-server" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.479722 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerName="registry-server" Jan 26 14:44:58 crc kubenswrapper[4922]: E0126 14:44:58.479743 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ac053-8747-40cb-87df-2ad523dafbf0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.479751 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ac053-8747-40cb-87df-2ad523dafbf0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 14:44:58 crc kubenswrapper[4922]: E0126 14:44:58.479781 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerName="extract-utilities" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.479789 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerName="extract-utilities" Jan 26 14:44:58 crc kubenswrapper[4922]: E0126 14:44:58.479800 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerName="extract-content" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.479806 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerName="extract-content" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.480014 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd6ac053-8747-40cb-87df-2ad523dafbf0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.480038 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="610e4a98-3ee5-436e-ae42-3a0e37d7b8b8" containerName="registry-server" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.480979 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.483489 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.483703 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.484093 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.484283 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.498139 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mdksr"] Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.611158 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lm2\" (UniqueName: \"kubernetes.io/projected/377f1114-a9f8-4b98-96c9-71f827483095-kube-api-access-x8lm2\") pod \"ssh-known-hosts-edpm-deployment-mdksr\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") " pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.611238 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mdksr\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") " pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.611457 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mdksr\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") " pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.713621 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8lm2\" (UniqueName: \"kubernetes.io/projected/377f1114-a9f8-4b98-96c9-71f827483095-kube-api-access-x8lm2\") pod \"ssh-known-hosts-edpm-deployment-mdksr\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") " pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.713674 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mdksr\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") " pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.713749 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mdksr\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") " pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" Jan 26 14:44:58 crc 
kubenswrapper[4922]: I0126 14:44:58.721194 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mdksr\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") " pod="openstack/ssh-known-hosts-edpm-deployment-mdksr"
Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.728905 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mdksr\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") " pod="openstack/ssh-known-hosts-edpm-deployment-mdksr"
Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.738677 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8lm2\" (UniqueName: \"kubernetes.io/projected/377f1114-a9f8-4b98-96c9-71f827483095-kube-api-access-x8lm2\") pod \"ssh-known-hosts-edpm-deployment-mdksr\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") " pod="openstack/ssh-known-hosts-edpm-deployment-mdksr"
Jan 26 14:44:58 crc kubenswrapper[4922]: I0126 14:44:58.805236 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mdksr"
Jan 26 14:44:59 crc kubenswrapper[4922]: I0126 14:44:59.159583 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mdksr"]
Jan 26 14:44:59 crc kubenswrapper[4922]: I0126 14:44:59.397703 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" event={"ID":"377f1114-a9f8-4b98-96c9-71f827483095","Type":"ContainerStarted","Data":"5422fbbe3d19b17b096844093f0983d8e4bb7e58a9c2ee1c92f25cc5f8b7acc4"}
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.143523 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"]
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.145747 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.148656 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.157795 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"]
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.159224 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.244530 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-config-volume\") pod \"collect-profiles-29490645-zlj7t\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.244625 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mq87\" (UniqueName: \"kubernetes.io/projected/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-kube-api-access-5mq87\") pod \"collect-profiles-29490645-zlj7t\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.244695 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-secret-volume\") pod \"collect-profiles-29490645-zlj7t\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.346641 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-config-volume\") pod \"collect-profiles-29490645-zlj7t\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.346713 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mq87\" (UniqueName: \"kubernetes.io/projected/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-kube-api-access-5mq87\") pod \"collect-profiles-29490645-zlj7t\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.346772 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-secret-volume\") pod \"collect-profiles-29490645-zlj7t\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.347978 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-config-volume\") pod \"collect-profiles-29490645-zlj7t\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.355552 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-secret-volume\") pod \"collect-profiles-29490645-zlj7t\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.365644 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mq87\" (UniqueName: \"kubernetes.io/projected/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-kube-api-access-5mq87\") pod \"collect-profiles-29490645-zlj7t\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.477568 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:00 crc kubenswrapper[4922]: I0126 14:45:00.988459 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"]
Jan 26 14:45:01 crc kubenswrapper[4922]: W0126 14:45:01.001827 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6825dd3e_7e80_4f5e_846e_f9ed3c7e9d5c.slice/crio-2551f6e60961b305671922135b734c78a8cbcf15eafc381d15142ea5fd86ceef WatchSource:0}: Error finding container 2551f6e60961b305671922135b734c78a8cbcf15eafc381d15142ea5fd86ceef: Status 404 returned error can't find the container with id 2551f6e60961b305671922135b734c78a8cbcf15eafc381d15142ea5fd86ceef
Jan 26 14:45:01 crc kubenswrapper[4922]: I0126 14:45:01.414834 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" event={"ID":"377f1114-a9f8-4b98-96c9-71f827483095","Type":"ContainerStarted","Data":"ebccb381f136093303df0e6ef7528fa663ca1e41025e07440eb706797af7d526"}
Jan 26 14:45:01 crc kubenswrapper[4922]: I0126 14:45:01.416771 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t" event={"ID":"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c","Type":"ContainerStarted","Data":"535ab1ea368304a28468c09383b87e151d44939f96cfefef531cd842780dd643"}
Jan 26 14:45:01 crc kubenswrapper[4922]: I0126 14:45:01.416804 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t" event={"ID":"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c","Type":"ContainerStarted","Data":"2551f6e60961b305671922135b734c78a8cbcf15eafc381d15142ea5fd86ceef"}
Jan 26 14:45:01 crc kubenswrapper[4922]: I0126 14:45:01.439177 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" podStartSLOduration=1.718480923 podStartE2EDuration="3.4391558s" podCreationTimestamp="2026-01-26 14:44:58 +0000 UTC" firstStartedPulling="2026-01-26 14:44:59.166974186 +0000 UTC m=+2116.369236958" lastFinishedPulling="2026-01-26 14:45:00.887649073 +0000 UTC m=+2118.089911835" observedRunningTime="2026-01-26 14:45:01.436104108 +0000 UTC m=+2118.638366890" watchObservedRunningTime="2026-01-26 14:45:01.4391558 +0000 UTC m=+2118.641418572"
Jan 26 14:45:01 crc kubenswrapper[4922]: I0126 14:45:01.453255 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t" podStartSLOduration=1.4532388680000001 podStartE2EDuration="1.453238868s" podCreationTimestamp="2026-01-26 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:45:01.446982509 +0000 UTC m=+2118.649245281" watchObservedRunningTime="2026-01-26 14:45:01.453238868 +0000 UTC m=+2118.655501640"
Jan 26 14:45:02 crc kubenswrapper[4922]: I0126 14:45:02.424686 4922 generic.go:334] "Generic (PLEG): container finished" podID="6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c" containerID="535ab1ea368304a28468c09383b87e151d44939f96cfefef531cd842780dd643" exitCode=0
Jan 26 14:45:02 crc kubenswrapper[4922]: I0126 14:45:02.424790 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t" event={"ID":"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c","Type":"ContainerDied","Data":"535ab1ea368304a28468c09383b87e151d44939f96cfefef531cd842780dd643"}
Jan 26 14:45:03 crc kubenswrapper[4922]: I0126 14:45:03.865095 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:03 crc kubenswrapper[4922]: I0126 14:45:03.915614 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-secret-volume\") pod \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") "
Jan 26 14:45:03 crc kubenswrapper[4922]: I0126 14:45:03.915859 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mq87\" (UniqueName: \"kubernetes.io/projected/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-kube-api-access-5mq87\") pod \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") "
Jan 26 14:45:03 crc kubenswrapper[4922]: I0126 14:45:03.916271 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-config-volume\") pod \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\" (UID: \"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c\") "
Jan 26 14:45:03 crc kubenswrapper[4922]: I0126 14:45:03.917161 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-config-volume" (OuterVolumeSpecName: "config-volume") pod "6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c" (UID: "6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:45:03 crc kubenswrapper[4922]: I0126 14:45:03.922340 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c" (UID: "6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:45:03 crc kubenswrapper[4922]: I0126 14:45:03.927479 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-kube-api-access-5mq87" (OuterVolumeSpecName: "kube-api-access-5mq87") pod "6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c" (UID: "6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c"). InnerVolumeSpecName "kube-api-access-5mq87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:45:04 crc kubenswrapper[4922]: I0126 14:45:04.018584 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:04 crc kubenswrapper[4922]: I0126 14:45:04.018617 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:04 crc kubenswrapper[4922]: I0126 14:45:04.018627 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mq87\" (UniqueName: \"kubernetes.io/projected/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c-kube-api-access-5mq87\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:04 crc kubenswrapper[4922]: I0126 14:45:04.461091 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t" event={"ID":"6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c","Type":"ContainerDied","Data":"2551f6e60961b305671922135b734c78a8cbcf15eafc381d15142ea5fd86ceef"}
Jan 26 14:45:04 crc kubenswrapper[4922]: I0126 14:45:04.461130 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2551f6e60961b305671922135b734c78a8cbcf15eafc381d15142ea5fd86ceef"
Jan 26 14:45:04 crc kubenswrapper[4922]: I0126 14:45:04.461150 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"
Jan 26 14:45:04 crc kubenswrapper[4922]: I0126 14:45:04.952092 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv"]
Jan 26 14:45:04 crc kubenswrapper[4922]: I0126 14:45:04.960576 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490600-74vcv"]
Jan 26 14:45:05 crc kubenswrapper[4922]: I0126 14:45:05.103056 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="538a74fd-fc9a-49f8-83cc-c33a83d15081" path="/var/lib/kubelet/pods/538a74fd-fc9a-49f8-83cc-c33a83d15081/volumes"
Jan 26 14:45:08 crc kubenswrapper[4922]: I0126 14:45:08.494548 4922 generic.go:334] "Generic (PLEG): container finished" podID="377f1114-a9f8-4b98-96c9-71f827483095" containerID="ebccb381f136093303df0e6ef7528fa663ca1e41025e07440eb706797af7d526" exitCode=0
Jan 26 14:45:08 crc kubenswrapper[4922]: I0126 14:45:08.494613 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" event={"ID":"377f1114-a9f8-4b98-96c9-71f827483095","Type":"ContainerDied","Data":"ebccb381f136093303df0e6ef7528fa663ca1e41025e07440eb706797af7d526"}
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.016569 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mdksr"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.090741 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-ssh-key-openstack-edpm-ipam\") pod \"377f1114-a9f8-4b98-96c9-71f827483095\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") "
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.090874 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-inventory-0\") pod \"377f1114-a9f8-4b98-96c9-71f827483095\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") "
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.090904 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8lm2\" (UniqueName: \"kubernetes.io/projected/377f1114-a9f8-4b98-96c9-71f827483095-kube-api-access-x8lm2\") pod \"377f1114-a9f8-4b98-96c9-71f827483095\" (UID: \"377f1114-a9f8-4b98-96c9-71f827483095\") "
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.098141 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377f1114-a9f8-4b98-96c9-71f827483095-kube-api-access-x8lm2" (OuterVolumeSpecName: "kube-api-access-x8lm2") pod "377f1114-a9f8-4b98-96c9-71f827483095" (UID: "377f1114-a9f8-4b98-96c9-71f827483095"). InnerVolumeSpecName "kube-api-access-x8lm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.122958 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "377f1114-a9f8-4b98-96c9-71f827483095" (UID: "377f1114-a9f8-4b98-96c9-71f827483095"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.143140 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "377f1114-a9f8-4b98-96c9-71f827483095" (UID: "377f1114-a9f8-4b98-96c9-71f827483095"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.194015 4922 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.194043 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8lm2\" (UniqueName: \"kubernetes.io/projected/377f1114-a9f8-4b98-96c9-71f827483095-kube-api-access-x8lm2\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.194053 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/377f1114-a9f8-4b98-96c9-71f827483095-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.514216 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mdksr" event={"ID":"377f1114-a9f8-4b98-96c9-71f827483095","Type":"ContainerDied","Data":"5422fbbe3d19b17b096844093f0983d8e4bb7e58a9c2ee1c92f25cc5f8b7acc4"}
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.514602 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5422fbbe3d19b17b096844093f0983d8e4bb7e58a9c2ee1c92f25cc5f8b7acc4"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.514302 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mdksr"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.619344 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"]
Jan 26 14:45:10 crc kubenswrapper[4922]: E0126 14:45:10.619934 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377f1114-a9f8-4b98-96c9-71f827483095" containerName="ssh-known-hosts-edpm-deployment"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.619952 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="377f1114-a9f8-4b98-96c9-71f827483095" containerName="ssh-known-hosts-edpm-deployment"
Jan 26 14:45:10 crc kubenswrapper[4922]: E0126 14:45:10.619974 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c" containerName="collect-profiles"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.619981 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c" containerName="collect-profiles"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.620192 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c" containerName="collect-profiles"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.620231 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="377f1114-a9f8-4b98-96c9-71f827483095" containerName="ssh-known-hosts-edpm-deployment"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.620892 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.623256 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.623562 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.623671 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.623581 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.636304 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"]
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.702224 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9pm5\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.702291 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g9wx\" (UniqueName: \"kubernetes.io/projected/d0238a10-7400-4e82-ab24-d9f30ee2b02d-kube-api-access-8g9wx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9pm5\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.702456 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9pm5\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.804176 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9pm5\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.804238 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g9wx\" (UniqueName: \"kubernetes.io/projected/d0238a10-7400-4e82-ab24-d9f30ee2b02d-kube-api-access-8g9wx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9pm5\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.804369 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9pm5\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.808999 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9pm5\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.816699 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9pm5\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.823724 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g9wx\" (UniqueName: \"kubernetes.io/projected/d0238a10-7400-4e82-ab24-d9f30ee2b02d-kube-api-access-8g9wx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9pm5\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:10 crc kubenswrapper[4922]: I0126 14:45:10.938476 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:11 crc kubenswrapper[4922]: I0126 14:45:11.442203 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"]
Jan 26 14:45:11 crc kubenswrapper[4922]: I0126 14:45:11.522552 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5" event={"ID":"d0238a10-7400-4e82-ab24-d9f30ee2b02d","Type":"ContainerStarted","Data":"57c65a57c01b502d34ecc259d402fda04f6da1d26fcd6e7121c6a1f41bece44a"}
Jan 26 14:45:12 crc kubenswrapper[4922]: I0126 14:45:12.532684 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5" event={"ID":"d0238a10-7400-4e82-ab24-d9f30ee2b02d","Type":"ContainerStarted","Data":"642aadc18b2a35bf53576f241073bb411ee88368ca9e4911c73b1a18e30cdd94"}
Jan 26 14:45:16 crc kubenswrapper[4922]: I0126 14:45:16.600326 4922 scope.go:117] "RemoveContainer" containerID="bbd46f2939937102fa04da55818c48317510df63c3e07634aa05e9fb44dd5165"
Jan 26 14:45:20 crc kubenswrapper[4922]: I0126 14:45:20.600174 4922 generic.go:334] "Generic (PLEG): container finished" podID="d0238a10-7400-4e82-ab24-d9f30ee2b02d" containerID="642aadc18b2a35bf53576f241073bb411ee88368ca9e4911c73b1a18e30cdd94" exitCode=0
Jan 26 14:45:20 crc kubenswrapper[4922]: I0126 14:45:20.600309 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5" event={"ID":"d0238a10-7400-4e82-ab24-d9f30ee2b02d","Type":"ContainerDied","Data":"642aadc18b2a35bf53576f241073bb411ee88368ca9e4911c73b1a18e30cdd94"}
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.079839 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.152573 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-ssh-key-openstack-edpm-ipam\") pod \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") "
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.152670 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g9wx\" (UniqueName: \"kubernetes.io/projected/d0238a10-7400-4e82-ab24-d9f30ee2b02d-kube-api-access-8g9wx\") pod \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") "
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.152751 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-inventory\") pod \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\" (UID: \"d0238a10-7400-4e82-ab24-d9f30ee2b02d\") "
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.158809 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0238a10-7400-4e82-ab24-d9f30ee2b02d-kube-api-access-8g9wx" (OuterVolumeSpecName: "kube-api-access-8g9wx") pod "d0238a10-7400-4e82-ab24-d9f30ee2b02d" (UID: "d0238a10-7400-4e82-ab24-d9f30ee2b02d"). InnerVolumeSpecName "kube-api-access-8g9wx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.195230 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d0238a10-7400-4e82-ab24-d9f30ee2b02d" (UID: "d0238a10-7400-4e82-ab24-d9f30ee2b02d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.196336 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-inventory" (OuterVolumeSpecName: "inventory") pod "d0238a10-7400-4e82-ab24-d9f30ee2b02d" (UID: "d0238a10-7400-4e82-ab24-d9f30ee2b02d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.255513 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.255720 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0238a10-7400-4e82-ab24-d9f30ee2b02d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.255784 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g9wx\" (UniqueName: \"kubernetes.io/projected/d0238a10-7400-4e82-ab24-d9f30ee2b02d-kube-api-access-8g9wx\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.624037 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5" event={"ID":"d0238a10-7400-4e82-ab24-d9f30ee2b02d","Type":"ContainerDied","Data":"57c65a57c01b502d34ecc259d402fda04f6da1d26fcd6e7121c6a1f41bece44a"}
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.624655 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c65a57c01b502d34ecc259d402fda04f6da1d26fcd6e7121c6a1f41bece44a"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.624130 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9pm5"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.728849 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"]
Jan 26 14:45:22 crc kubenswrapper[4922]: E0126 14:45:22.729291 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0238a10-7400-4e82-ab24-d9f30ee2b02d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.729313 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0238a10-7400-4e82-ab24-d9f30ee2b02d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.729574 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0238a10-7400-4e82-ab24-d9f30ee2b02d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.730455 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.734797 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.735193 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.735565 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.735796 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.739263 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"]
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.867352 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.867454 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpms\" (UniqueName: \"kubernetes.io/projected/e17ee626-9062-4bd3-8566-93e6160b89bc-kube-api-access-srpms\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.867567 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.970320 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpms\" (UniqueName: \"kubernetes.io/projected/e17ee626-9062-4bd3-8566-93e6160b89bc-kube-api-access-srpms\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.970498 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.970689 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.976522 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.976562 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:22 crc kubenswrapper[4922]: I0126 14:45:22.992791 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpms\" (UniqueName: \"kubernetes.io/projected/e17ee626-9062-4bd3-8566-93e6160b89bc-kube-api-access-srpms\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:23 crc kubenswrapper[4922]: I0126 14:45:23.057912 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:23 crc kubenswrapper[4922]: I0126 14:45:23.598685 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"]
Jan 26 14:45:23 crc kubenswrapper[4922]: W0126 14:45:23.602553 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode17ee626_9062_4bd3_8566_93e6160b89bc.slice/crio-40ad40e72bf3e5bcfbee9198e60e8ee0273c7d348805b3c8348482b92b102311 WatchSource:0}: Error finding container 40ad40e72bf3e5bcfbee9198e60e8ee0273c7d348805b3c8348482b92b102311: Status 404 returned error can't find the container with id 40ad40e72bf3e5bcfbee9198e60e8ee0273c7d348805b3c8348482b92b102311
Jan 26 14:45:23 crc kubenswrapper[4922]: I0126 14:45:23.641383 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d" event={"ID":"e17ee626-9062-4bd3-8566-93e6160b89bc","Type":"ContainerStarted","Data":"40ad40e72bf3e5bcfbee9198e60e8ee0273c7d348805b3c8348482b92b102311"}
Jan 26 14:45:24 crc kubenswrapper[4922]: I0126 14:45:24.653855 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d" event={"ID":"e17ee626-9062-4bd3-8566-93e6160b89bc","Type":"ContainerStarted","Data":"7e54fb6c595546c2cccdf76298c2eda4196a970d4ba9b559f885148f4ac321d6"}
Jan 26 14:45:24 crc kubenswrapper[4922]: I0126 14:45:24.685742 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d" podStartSLOduration=2.237547273 podStartE2EDuration="2.685714265s" podCreationTimestamp="2026-01-26 14:45:22 +0000 UTC" firstStartedPulling="2026-01-26 14:45:23.610797092 +0000 UTC m=+2140.813059874" lastFinishedPulling="2026-01-26 14:45:24.058964094 +0000 UTC m=+2141.261226866" observedRunningTime="2026-01-26 14:45:24.673642858 +0000 UTC m=+2141.875905650" watchObservedRunningTime="2026-01-26 14:45:24.685714265 +0000 UTC m=+2141.887977107"
Jan 26 14:45:34 crc kubenswrapper[4922]: I0126 14:45:34.749027 4922 generic.go:334] "Generic (PLEG): container finished" podID="e17ee626-9062-4bd3-8566-93e6160b89bc" containerID="7e54fb6c595546c2cccdf76298c2eda4196a970d4ba9b559f885148f4ac321d6" exitCode=0
Jan 26 14:45:34 crc kubenswrapper[4922]: I0126 14:45:34.749142 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d" event={"ID":"e17ee626-9062-4bd3-8566-93e6160b89bc","Type":"ContainerDied","Data":"7e54fb6c595546c2cccdf76298c2eda4196a970d4ba9b559f885148f4ac321d6"}
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.202694 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.256540 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-ssh-key-openstack-edpm-ipam\") pod \"e17ee626-9062-4bd3-8566-93e6160b89bc\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") "
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.256845 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-inventory\") pod \"e17ee626-9062-4bd3-8566-93e6160b89bc\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") "
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.256872 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srpms\" (UniqueName: \"kubernetes.io/projected/e17ee626-9062-4bd3-8566-93e6160b89bc-kube-api-access-srpms\") pod \"e17ee626-9062-4bd3-8566-93e6160b89bc\" (UID: \"e17ee626-9062-4bd3-8566-93e6160b89bc\") "
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.272613 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17ee626-9062-4bd3-8566-93e6160b89bc-kube-api-access-srpms" (OuterVolumeSpecName: "kube-api-access-srpms") pod "e17ee626-9062-4bd3-8566-93e6160b89bc" (UID: "e17ee626-9062-4bd3-8566-93e6160b89bc"). InnerVolumeSpecName "kube-api-access-srpms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.313185 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-inventory" (OuterVolumeSpecName: "inventory") pod "e17ee626-9062-4bd3-8566-93e6160b89bc" (UID: "e17ee626-9062-4bd3-8566-93e6160b89bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.318916 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e17ee626-9062-4bd3-8566-93e6160b89bc" (UID: "e17ee626-9062-4bd3-8566-93e6160b89bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.360126 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.360157 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srpms\" (UniqueName: \"kubernetes.io/projected/e17ee626-9062-4bd3-8566-93e6160b89bc-kube-api-access-srpms\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.360171 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e17ee626-9062-4bd3-8566-93e6160b89bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.778367 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d" event={"ID":"e17ee626-9062-4bd3-8566-93e6160b89bc","Type":"ContainerDied","Data":"40ad40e72bf3e5bcfbee9198e60e8ee0273c7d348805b3c8348482b92b102311"}
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.778411 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ad40e72bf3e5bcfbee9198e60e8ee0273c7d348805b3c8348482b92b102311"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.778476 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.867651 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"]
Jan 26 14:45:36 crc kubenswrapper[4922]: E0126 14:45:36.868043 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17ee626-9062-4bd3-8566-93e6160b89bc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.868061 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17ee626-9062-4bd3-8566-93e6160b89bc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.868252 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17ee626-9062-4bd3-8566-93e6160b89bc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.870267 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.878672 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.878672 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.878725 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.878823 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.879056 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.879345 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.879560 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.896838 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.921202 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"]
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.981931 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982063 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982130 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982162 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982197 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982225 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982262 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982297 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982333 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbdf\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-kube-api-access-ppbdf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982359 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982385 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982441 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982464 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:36 crc kubenswrapper[4922]: I0126 14:45:36.982514 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.084467 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.084785 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.084909 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.085011 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.085128 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.085650 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.085760 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.085849 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.085935 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.086049 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppbdf\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-kube-api-access-ppbdf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.086171 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.086263 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.086389 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.086465 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.089727 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.090079 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.090717 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.090754 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.091253 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.091462 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.091542 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.092417 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.092458 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.092840 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.093733 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.094270 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.095515 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.105230 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppbdf\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-kube-api-access-ppbdf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.207894 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.592748 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw"]
Jan 26 14:45:37 crc kubenswrapper[4922]: I0126 14:45:37.788316 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw" event={"ID":"dfdf6694-c807-448c-beed-03053e451f2b","Type":"ContainerStarted","Data":"cb8f21a8ceaf0c33100662e1d373e6332028a7dec4069f326e372d8e6214ad37"}
Jan 26 14:45:38 crc kubenswrapper[4922]: I0126 14:45:38.809293 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw" event={"ID":"dfdf6694-c807-448c-beed-03053e451f2b","Type":"ContainerStarted","Data":"305c0d5c14f8718a0d02ceb3f048cda4c2244722864e2b1c4bae370b5194af3e"}
Jan 26 14:45:38 crc kubenswrapper[4922]: I0126 14:45:38.842329 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw" podStartSLOduration=2.09480489 podStartE2EDuration="2.842314513s" podCreationTimestamp="2026-01-26 14:45:36 +0000 UTC" firstStartedPulling="2026-01-26 14:45:37.613866281 +0000 UTC m=+2154.816129053" lastFinishedPulling="2026-01-26 14:45:38.361375904 +0000 UTC m=+2155.563638676" observedRunningTime="2026-01-26 14:45:38.838749927 +0000 UTC m=+2156.041012699" watchObservedRunningTime="2026-01-26 14:45:38.842314513 +0000 UTC m=+2156.044577285"
Jan 26 14:46:11 crc kubenswrapper[4922]: I0126 14:46:11.307455 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:46:11 crc kubenswrapper[4922]: I0126 14:46:11.307972 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:46:18 crc kubenswrapper[4922]: I0126 14:46:18.158221 4922 generic.go:334] "Generic (PLEG): container finished" podID="dfdf6694-c807-448c-beed-03053e451f2b" containerID="305c0d5c14f8718a0d02ceb3f048cda4c2244722864e2b1c4bae370b5194af3e" exitCode=0
Jan 26 14:46:18 crc kubenswrapper[4922]: I0126 14:46:18.158386 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw" event={"ID":"dfdf6694-c807-448c-beed-03053e451f2b","Type":"ContainerDied","Data":"305c0d5c14f8718a0d02ceb3f048cda4c2244722864e2b1c4bae370b5194af3e"} Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.618144 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705184 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-inventory\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705240 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-repo-setup-combined-ca-bundle\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705287 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-neutron-metadata-combined-ca-bundle\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705320 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-nova-combined-ca-bundle\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705360 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ovn-combined-ca-bundle\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705374 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-bootstrap-combined-ca-bundle\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705427 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-libvirt-combined-ca-bundle\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705502 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705522 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705560 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705602 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705645 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ssh-key-openstack-edpm-ipam\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705680 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-telemetry-combined-ca-bundle\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.705715 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppbdf\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-kube-api-access-ppbdf\") pod \"dfdf6694-c807-448c-beed-03053e451f2b\" (UID: \"dfdf6694-c807-448c-beed-03053e451f2b\") " Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.711124 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.711183 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.717956 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.718011 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.718392 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.718489 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.718818 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.719545 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-kube-api-access-ppbdf" (OuterVolumeSpecName: "kube-api-access-ppbdf") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "kube-api-access-ppbdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.723714 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.724877 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.725985 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.729241 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.749371 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.750429 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-inventory" (OuterVolumeSpecName: "inventory") pod "dfdf6694-c807-448c-beed-03053e451f2b" (UID: "dfdf6694-c807-448c-beed-03053e451f2b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.807899 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.807930 4922 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.807941 4922 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.807951 4922 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.807963 4922 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.807971 4922 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.807981 4922 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.807989 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.808000 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.808008 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.808017 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.808027 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.808037 4922 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdf6694-c807-448c-beed-03053e451f2b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:19 crc kubenswrapper[4922]: I0126 14:46:19.808046 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppbdf\" (UniqueName: \"kubernetes.io/projected/dfdf6694-c807-448c-beed-03053e451f2b-kube-api-access-ppbdf\") on node \"crc\" DevicePath \"\"" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.181837 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw" event={"ID":"dfdf6694-c807-448c-beed-03053e451f2b","Type":"ContainerDied","Data":"cb8f21a8ceaf0c33100662e1d373e6332028a7dec4069f326e372d8e6214ad37"} Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.182480 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb8f21a8ceaf0c33100662e1d373e6332028a7dec4069f326e372d8e6214ad37" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.181902 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.300586 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz"] Jan 26 14:46:20 crc kubenswrapper[4922]: E0126 14:46:20.301237 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdf6694-c807-448c-beed-03053e451f2b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.301266 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdf6694-c807-448c-beed-03053e451f2b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.301516 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdf6694-c807-448c-beed-03053e451f2b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.302407 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.315418 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz"] Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.343585 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.343917 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.344099 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.344242 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.344149 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.418216 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.418446 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.418649 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6f59\" (UniqueName: \"kubernetes.io/projected/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-kube-api-access-v6f59\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.418726 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.418835 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.520004 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.520147 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.520187 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.520230 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.520248 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6f59\" (UniqueName: \"kubernetes.io/projected/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-kube-api-access-v6f59\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.521447 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.524733 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.530669 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.530814 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.536613 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6f59\" (UniqueName: \"kubernetes.io/projected/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-kube-api-access-v6f59\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wvsjz\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:20 crc kubenswrapper[4922]: I0126 14:46:20.662710 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:46:21 crc kubenswrapper[4922]: I0126 14:46:21.207812 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz"] Jan 26 14:46:22 crc kubenswrapper[4922]: I0126 14:46:22.220215 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" event={"ID":"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d","Type":"ContainerStarted","Data":"3ea18a5e84a6de2b6f2674ab988487b46bfe292bd3dc2a4923ea651ea61fc7dd"} Jan 26 14:46:22 crc kubenswrapper[4922]: I0126 14:46:22.220528 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" event={"ID":"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d","Type":"ContainerStarted","Data":"8c13cec349af28812e2e714dae38caabf7f7d07ce8bbd41914ccd4f99f29f90b"} Jan 26 14:46:22 crc kubenswrapper[4922]: I0126 14:46:22.238825 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" podStartSLOduration=1.773441707 podStartE2EDuration="2.238805805s" podCreationTimestamp="2026-01-26 14:46:20 +0000 UTC" firstStartedPulling="2026-01-26 14:46:21.208992775 +0000 UTC m=+2198.411255547" lastFinishedPulling="2026-01-26 14:46:21.674356833 +0000 UTC m=+2198.876619645" observedRunningTime="2026-01-26 14:46:22.235411013 +0000 UTC m=+2199.437673785" watchObservedRunningTime="2026-01-26 14:46:22.238805805 +0000 UTC m=+2199.441068597" Jan 26 14:46:41 crc kubenswrapper[4922]: I0126 14:46:41.306687 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:46:41 crc kubenswrapper[4922]: I0126 14:46:41.308042 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:47:11 crc kubenswrapper[4922]: I0126 14:47:11.306955 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:47:11 crc kubenswrapper[4922]: I0126 14:47:11.307921 4922 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:47:11 crc kubenswrapper[4922]: I0126 14:47:11.308009 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:47:11 crc kubenswrapper[4922]: I0126 14:47:11.309412 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:47:11 crc kubenswrapper[4922]: I0126 14:47:11.309485 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2" gracePeriod=600 Jan 26 14:47:11 crc kubenswrapper[4922]: E0126 14:47:11.430302 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:47:11 crc kubenswrapper[4922]: I0126 14:47:11.698599 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2" exitCode=0 Jan 26 14:47:11 crc kubenswrapper[4922]: I0126 14:47:11.698645 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"} Jan 26 14:47:11 crc kubenswrapper[4922]: I0126 14:47:11.698676 4922 scope.go:117] "RemoveContainer" containerID="fd36364959c3d9ab20a2aac447f7d4fb3bff085c6a9e6c63789643890d6297ba" Jan 26 14:47:11 crc kubenswrapper[4922]: I0126 14:47:11.699424 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2" Jan 26 14:47:11 crc kubenswrapper[4922]: E0126 14:47:11.699822 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:47:22 crc kubenswrapper[4922]: I0126 14:47:22.092831 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2" Jan 26 14:47:22 crc kubenswrapper[4922]: E0126 14:47:22.093794 4922 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:47:32 crc kubenswrapper[4922]: I0126 14:47:32.925540 4922 generic.go:334] "Generic (PLEG): container finished" podID="d6ff5d49-d748-41c2-9893-b3cd1fd09b2d" containerID="3ea18a5e84a6de2b6f2674ab988487b46bfe292bd3dc2a4923ea651ea61fc7dd" exitCode=0 Jan 26 14:47:32 crc kubenswrapper[4922]: I0126 14:47:32.925674 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" event={"ID":"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d","Type":"ContainerDied","Data":"3ea18a5e84a6de2b6f2674ab988487b46bfe292bd3dc2a4923ea651ea61fc7dd"} Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.357879 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.509211 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovn-combined-ca-bundle\") pod \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.509546 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ssh-key-openstack-edpm-ipam\") pod \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.509618 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-inventory\") pod \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.509765 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6f59\" (UniqueName: \"kubernetes.io/projected/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-kube-api-access-v6f59\") pod \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.509784 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovncontroller-config-0\") pod \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\" (UID: \"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d\") " Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.514520 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-kube-api-access-v6f59" (OuterVolumeSpecName: "kube-api-access-v6f59") pod "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d" (UID: "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d"). InnerVolumeSpecName "kube-api-access-v6f59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.514941 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d" (UID: "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.549990 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d" (UID: "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.550473 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-inventory" (OuterVolumeSpecName: "inventory") pod "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d" (UID: "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.554545 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d" (UID: "d6ff5d49-d748-41c2-9893-b3cd1fd09b2d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.612152 4922 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.612217 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.612244 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.612265 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6f59\" (UniqueName: \"kubernetes.io/projected/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-kube-api-access-v6f59\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.612287 4922 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6ff5d49-d748-41c2-9893-b3cd1fd09b2d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.944895 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" event={"ID":"d6ff5d49-d748-41c2-9893-b3cd1fd09b2d","Type":"ContainerDied","Data":"8c13cec349af28812e2e714dae38caabf7f7d07ce8bbd41914ccd4f99f29f90b"} Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.944974 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c13cec349af28812e2e714dae38caabf7f7d07ce8bbd41914ccd4f99f29f90b" Jan 26 14:47:34 crc kubenswrapper[4922]: I0126 14:47:34.944936 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wvsjz" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.053419 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2"] Jan 26 14:47:35 crc kubenswrapper[4922]: E0126 14:47:35.053908 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ff5d49-d748-41c2-9893-b3cd1fd09b2d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.053928 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ff5d49-d748-41c2-9893-b3cd1fd09b2d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.054218 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ff5d49-d748-41c2-9893-b3cd1fd09b2d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.055099 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.057883 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.057973 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.058118 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.058291 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.058348 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.058519 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.062817 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2"] Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.224262 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.224321 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.224352 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzbmd\" (UniqueName: \"kubernetes.io/projected/dd09600e-1e19-4a04-8e03-12312a20e513-kube-api-access-nzbmd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.225167 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.225333 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.225755 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.327178 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.327250 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.327309 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.327371 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.327397 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzbmd\" (UniqueName: \"kubernetes.io/projected/dd09600e-1e19-4a04-8e03-12312a20e513-kube-api-access-nzbmd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.327563 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") 
" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.340907 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.341640 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.342255 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.342267 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.342765 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.346687 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzbmd\" (UniqueName: \"kubernetes.io/projected/dd09600e-1e19-4a04-8e03-12312a20e513-kube-api-access-nzbmd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.377415 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.976333 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2"] Jan 26 14:47:35 crc kubenswrapper[4922]: I0126 14:47:35.982482 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:47:36 crc kubenswrapper[4922]: I0126 14:47:36.093499 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2" Jan 26 14:47:36 crc kubenswrapper[4922]: E0126 14:47:36.093712 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:47:36 crc kubenswrapper[4922]: I0126 14:47:36.965981 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" event={"ID":"dd09600e-1e19-4a04-8e03-12312a20e513","Type":"ContainerStarted","Data":"584487fe06f14de7fdb55e970b6d9de91a59759417d618b4db9ef235174b40d6"} Jan 26 14:47:36 crc kubenswrapper[4922]: I0126 14:47:36.966358 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" event={"ID":"dd09600e-1e19-4a04-8e03-12312a20e513","Type":"ContainerStarted","Data":"852718f9ad76979ad99b73c298b97af37ed502115dea856557bf626571a3a50a"} Jan 26 14:47:36 crc kubenswrapper[4922]: I0126 14:47:36.993956 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" podStartSLOduration=1.290684975 podStartE2EDuration="1.993925698s" podCreationTimestamp="2026-01-26 14:47:35 +0000 UTC" firstStartedPulling="2026-01-26 14:47:35.982177067 +0000 UTC m=+2273.184439839" lastFinishedPulling="2026-01-26 14:47:36.68541778 +0000 UTC m=+2273.887680562" observedRunningTime="2026-01-26 14:47:36.983743382 +0000 UTC m=+2274.186006164" watchObservedRunningTime="2026-01-26 14:47:36.993925698 +0000 UTC m=+2274.196188510" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.348290 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fh8ll"] Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.351217 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.374317 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fh8ll"] Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.470902 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtxz8\" (UniqueName: \"kubernetes.io/projected/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-kube-api-access-wtxz8\") pod \"community-operators-fh8ll\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.471192 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-utilities\") pod \"community-operators-fh8ll\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.471274 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-catalog-content\") pod \"community-operators-fh8ll\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.573872 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-utilities\") pod \"community-operators-fh8ll\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.573954 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-catalog-content\") pod \"community-operators-fh8ll\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.574083 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtxz8\" (UniqueName: \"kubernetes.io/projected/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-kube-api-access-wtxz8\") pod \"community-operators-fh8ll\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.574674 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-utilities\") pod \"community-operators-fh8ll\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.574674 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-catalog-content\") pod \"community-operators-fh8ll\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.610490 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wtxz8\" (UniqueName: \"kubernetes.io/projected/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-kube-api-access-wtxz8\") pod \"community-operators-fh8ll\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:41 crc kubenswrapper[4922]: I0126 14:47:41.672827 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:42 crc kubenswrapper[4922]: I0126 14:47:42.199654 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fh8ll"] Jan 26 14:47:43 crc kubenswrapper[4922]: I0126 14:47:43.030320 4922 generic.go:334] "Generic (PLEG): container finished" podID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerID="6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977" exitCode=0 Jan 26 14:47:43 crc kubenswrapper[4922]: I0126 14:47:43.030379 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8ll" event={"ID":"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8","Type":"ContainerDied","Data":"6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977"} Jan 26 14:47:43 crc kubenswrapper[4922]: I0126 14:47:43.030414 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8ll" event={"ID":"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8","Type":"ContainerStarted","Data":"635640e128d010b5ae2ba67b94e1aefc033f7a7a862c55cf98783d4b9dba2311"} Jan 26 14:47:44 crc kubenswrapper[4922]: I0126 14:47:44.044652 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8ll" event={"ID":"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8","Type":"ContainerStarted","Data":"2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2"} Jan 26 14:47:45 crc kubenswrapper[4922]: I0126 14:47:45.058421 4922 generic.go:334] "Generic (PLEG): container finished" podID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerID="2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2" exitCode=0 Jan 26 14:47:45 crc kubenswrapper[4922]: I0126 14:47:45.058530 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8ll" event={"ID":"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8","Type":"ContainerDied","Data":"2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2"} Jan 26 14:47:46 crc kubenswrapper[4922]: I0126 14:47:46.079825 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8ll" event={"ID":"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8","Type":"ContainerStarted","Data":"586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30"} Jan 26 14:47:48 crc kubenswrapper[4922]: I0126 14:47:48.791343 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fh8ll" podStartSLOduration=5.330707128 podStartE2EDuration="7.791321882s" podCreationTimestamp="2026-01-26 14:47:41 +0000 UTC" firstStartedPulling="2026-01-26 14:47:43.032588833 +0000 UTC m=+2280.234851605" lastFinishedPulling="2026-01-26 14:47:45.493203547 +0000 UTC m=+2282.695466359" observedRunningTime="2026-01-26 14:47:46.100763967 +0000 UTC m=+2283.303026739" watchObservedRunningTime="2026-01-26 14:47:48.791321882 +0000 UTC m=+2285.993584654" Jan 26 14:47:48 crc kubenswrapper[4922]: I0126 14:47:48.793009 4922 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-tfj8r"] Jan 26 14:47:48 crc kubenswrapper[4922]: I0126 14:47:48.795655 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:48 crc kubenswrapper[4922]: I0126 14:47:48.808627 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tfj8r"] Jan 26 14:47:48 crc kubenswrapper[4922]: I0126 14:47:48.920778 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-utilities\") pod \"certified-operators-tfj8r\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:48 crc kubenswrapper[4922]: I0126 14:47:48.920873 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrss6\" (UniqueName: \"kubernetes.io/projected/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-kube-api-access-xrss6\") pod \"certified-operators-tfj8r\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:48 crc kubenswrapper[4922]: I0126 14:47:48.921443 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-catalog-content\") pod \"certified-operators-tfj8r\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:49 crc kubenswrapper[4922]: I0126 14:47:49.023590 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrss6\" (UniqueName: \"kubernetes.io/projected/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-kube-api-access-xrss6\") pod \"certified-operators-tfj8r\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:49 crc kubenswrapper[4922]: I0126 14:47:49.023722 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-catalog-content\") pod \"certified-operators-tfj8r\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:49 crc kubenswrapper[4922]: I0126 14:47:49.023886 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-utilities\") pod \"certified-operators-tfj8r\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:49 crc kubenswrapper[4922]: I0126 14:47:49.024343 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-catalog-content\") pod \"certified-operators-tfj8r\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:49 crc kubenswrapper[4922]: I0126 14:47:49.024371 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-utilities\") pod \"certified-operators-tfj8r\" (UID: 
\"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:49 crc kubenswrapper[4922]: I0126 14:47:49.056383 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrss6\" (UniqueName: \"kubernetes.io/projected/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-kube-api-access-xrss6\") pod \"certified-operators-tfj8r\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:49 crc kubenswrapper[4922]: I0126 14:47:49.116770 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:49 crc kubenswrapper[4922]: I0126 14:47:49.623413 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tfj8r"] Jan 26 14:47:49 crc kubenswrapper[4922]: W0126 14:47:49.626276 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd89cfd7d_50a6_40ce_875c_4442dad4bc4a.slice/crio-9d38b53591e34da7a6b694c866efc9d3f5adc5167a2c42dd20ae0b3fd0d269de WatchSource:0}: Error finding container 9d38b53591e34da7a6b694c866efc9d3f5adc5167a2c42dd20ae0b3fd0d269de: Status 404 returned error can't find the container with id 9d38b53591e34da7a6b694c866efc9d3f5adc5167a2c42dd20ae0b3fd0d269de Jan 26 14:47:50 crc kubenswrapper[4922]: I0126 14:47:50.122554 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfj8r" event={"ID":"d89cfd7d-50a6-40ce-875c-4442dad4bc4a","Type":"ContainerStarted","Data":"c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222"} Jan 26 14:47:50 crc kubenswrapper[4922]: I0126 14:47:50.123642 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfj8r" event={"ID":"d89cfd7d-50a6-40ce-875c-4442dad4bc4a","Type":"ContainerStarted","Data":"9d38b53591e34da7a6b694c866efc9d3f5adc5167a2c42dd20ae0b3fd0d269de"} Jan 26 14:47:51 crc kubenswrapper[4922]: I0126 14:47:51.093367 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2" Jan 26 14:47:51 crc kubenswrapper[4922]: E0126 14:47:51.094020 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:47:51 crc kubenswrapper[4922]: I0126 14:47:51.136679 4922 generic.go:334] "Generic (PLEG): container finished" podID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerID="c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222" exitCode=0 Jan 26 14:47:51 crc kubenswrapper[4922]: I0126 14:47:51.136735 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfj8r" event={"ID":"d89cfd7d-50a6-40ce-875c-4442dad4bc4a","Type":"ContainerDied","Data":"c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222"} Jan 26 14:47:51 crc kubenswrapper[4922]: I0126 14:47:51.673119 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:51 crc kubenswrapper[4922]: I0126 14:47:51.673171 4922 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:51 crc kubenswrapper[4922]: I0126 14:47:51.722502 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:52 crc kubenswrapper[4922]: I0126 14:47:52.209351 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:53 crc kubenswrapper[4922]: I0126 14:47:53.161539 4922 generic.go:334] "Generic (PLEG): container finished" podID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerID="7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5" exitCode=0 Jan 26 14:47:53 crc kubenswrapper[4922]: I0126 14:47:53.161640 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfj8r" event={"ID":"d89cfd7d-50a6-40ce-875c-4442dad4bc4a","Type":"ContainerDied","Data":"7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5"} Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.106876 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fh8ll"] Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.171410 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfj8r" event={"ID":"d89cfd7d-50a6-40ce-875c-4442dad4bc4a","Type":"ContainerStarted","Data":"21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e"} Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.171550 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fh8ll" podUID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerName="registry-server" containerID="cri-o://586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30" gracePeriod=2 Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.189455 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tfj8r" podStartSLOduration=3.669452559 podStartE2EDuration="6.189432901s" podCreationTimestamp="2026-01-26 14:47:48 +0000 UTC" firstStartedPulling="2026-01-26 14:47:51.139342046 +0000 UTC m=+2288.341604858" lastFinishedPulling="2026-01-26 14:47:53.659322428 +0000 UTC m=+2290.861585200" observedRunningTime="2026-01-26 14:47:54.187454497 +0000 UTC m=+2291.389717279" watchObservedRunningTime="2026-01-26 14:47:54.189432901 +0000 UTC m=+2291.391695683" Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.729695 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.855992 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-utilities\") pod \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.856174 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-catalog-content\") pod \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.856293 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtxz8\" (UniqueName: \"kubernetes.io/projected/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-kube-api-access-wtxz8\") pod \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\" (UID: \"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8\") " Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.858700 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-utilities" (OuterVolumeSpecName: "utilities") pod "ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" (UID: "ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.861939 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-kube-api-access-wtxz8" (OuterVolumeSpecName: "kube-api-access-wtxz8") pod "ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" (UID: "ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8"). InnerVolumeSpecName "kube-api-access-wtxz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.913635 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" (UID: "ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.958756 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.958809 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:54 crc kubenswrapper[4922]: I0126 14:47:54.958826 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtxz8\" (UniqueName: \"kubernetes.io/projected/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8-kube-api-access-wtxz8\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.184702 4922 generic.go:334] "Generic (PLEG): container finished" podID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerID="586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30" exitCode=0 Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.185056 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fh8ll" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.185736 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8ll" event={"ID":"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8","Type":"ContainerDied","Data":"586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30"} Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.185791 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8ll" event={"ID":"ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8","Type":"ContainerDied","Data":"635640e128d010b5ae2ba67b94e1aefc033f7a7a862c55cf98783d4b9dba2311"} Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.185815 4922 scope.go:117] "RemoveContainer" containerID="586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.209038 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fh8ll"] Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.214225 4922 scope.go:117] "RemoveContainer" containerID="2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.220031 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fh8ll"] Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.235016 4922 scope.go:117] "RemoveContainer" containerID="6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.302675 4922 scope.go:117] "RemoveContainer" containerID="586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30" Jan 26 14:47:55 crc kubenswrapper[4922]: E0126 14:47:55.305986 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30\": container with ID starting with 586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30 not found: ID does not exist" containerID="586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.306049 
4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30"} err="failed to get container status \"586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30\": rpc error: code = NotFound desc = could not find container \"586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30\": container with ID starting with 586deba3bd96c3a84b82a5e353952e430eed6fff841cbc39d3620c12d744ce30 not found: ID does not exist" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.306104 4922 scope.go:117] "RemoveContainer" containerID="2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2" Jan 26 14:47:55 crc kubenswrapper[4922]: E0126 14:47:55.306466 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2\": container with ID starting with 2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2 not found: ID does not exist" containerID="2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.306494 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2"} err="failed to get container status \"2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2\": rpc error: code = NotFound desc = could not find container \"2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2\": container with ID starting with 2dd6d6745418f59f721c27b1911e80815696e437cda8f74b090d5304504e57f2 not found: ID does not exist" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.306512 4922 scope.go:117] "RemoveContainer" containerID="6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977" Jan 26 14:47:55 crc kubenswrapper[4922]: E0126 14:47:55.306775 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977\": container with ID starting with 6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977 not found: ID does not exist" containerID="6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977" Jan 26 14:47:55 crc kubenswrapper[4922]: I0126 14:47:55.306802 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977"} err="failed to get container status \"6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977\": rpc error: code = NotFound desc = could not find container \"6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977\": container with ID starting with 6173a5e758aeab4e04d07e24d5a80b4872e31116797dbd72311e37cc0a586977 not found: ID does not exist" Jan 26 14:47:57 crc kubenswrapper[4922]: I0126 14:47:57.102888 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" path="/var/lib/kubelet/pods/ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8/volumes" Jan 26 14:47:59 crc kubenswrapper[4922]: I0126 14:47:59.117430 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:59 crc kubenswrapper[4922]: I0126 14:47:59.117804 4922 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:59 crc kubenswrapper[4922]: I0126 14:47:59.165745 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:47:59 crc kubenswrapper[4922]: I0126 14:47:59.271818 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:48:00 crc kubenswrapper[4922]: I0126 14:48:00.306089 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tfj8r"] Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.238769 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tfj8r" podUID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerName="registry-server" containerID="cri-o://21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e" gracePeriod=2 Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.745754 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.802663 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrss6\" (UniqueName: \"kubernetes.io/projected/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-kube-api-access-xrss6\") pod \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.802798 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-catalog-content\") pod \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.802866 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-utilities\") pod \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\" (UID: \"d89cfd7d-50a6-40ce-875c-4442dad4bc4a\") " Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.803878 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-utilities" (OuterVolumeSpecName: "utilities") pod "d89cfd7d-50a6-40ce-875c-4442dad4bc4a" (UID: "d89cfd7d-50a6-40ce-875c-4442dad4bc4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.808768 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-kube-api-access-xrss6" (OuterVolumeSpecName: "kube-api-access-xrss6") pod "d89cfd7d-50a6-40ce-875c-4442dad4bc4a" (UID: "d89cfd7d-50a6-40ce-875c-4442dad4bc4a"). InnerVolumeSpecName "kube-api-access-xrss6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.871678 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d89cfd7d-50a6-40ce-875c-4442dad4bc4a" (UID: "d89cfd7d-50a6-40ce-875c-4442dad4bc4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.905879 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrss6\" (UniqueName: \"kubernetes.io/projected/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-kube-api-access-xrss6\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.905921 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:01 crc kubenswrapper[4922]: I0126 14:48:01.905930 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89cfd7d-50a6-40ce-875c-4442dad4bc4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.255239 4922 generic.go:334] "Generic (PLEG): container finished" podID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerID="21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e" exitCode=0 Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.255309 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfj8r" event={"ID":"d89cfd7d-50a6-40ce-875c-4442dad4bc4a","Type":"ContainerDied","Data":"21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e"} Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.255322 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tfj8r" Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.255354 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfj8r" event={"ID":"d89cfd7d-50a6-40ce-875c-4442dad4bc4a","Type":"ContainerDied","Data":"9d38b53591e34da7a6b694c866efc9d3f5adc5167a2c42dd20ae0b3fd0d269de"} Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.255380 4922 scope.go:117] "RemoveContainer" containerID="21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e" Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.292081 4922 scope.go:117] "RemoveContainer" containerID="7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5" Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.293247 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tfj8r"] Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.317127 4922 scope.go:117] "RemoveContainer" containerID="c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222" Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.322155 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tfj8r"] Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.366557 4922 scope.go:117] "RemoveContainer" containerID="21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e" Jan 26 14:48:02 crc kubenswrapper[4922]: E0126 14:48:02.367175 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e\": container with ID starting with 21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e not found: ID does not exist" containerID="21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e" Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.367208 
4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e"} err="failed to get container status \"21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e\": rpc error: code = NotFound desc = could not find container \"21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e\": container with ID starting with 21d2616935013da9c909ce9182636cc4a768b0f1e87726e330bfaf65d6efb85e not found: ID does not exist"
Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.367227 4922 scope.go:117] "RemoveContainer" containerID="7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5"
Jan 26 14:48:02 crc kubenswrapper[4922]: E0126 14:48:02.367679 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5\": container with ID starting with 7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5 not found: ID does not exist" containerID="7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5"
Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.367737 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5"} err="failed to get container status \"7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5\": rpc error: code = NotFound desc = could not find container \"7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5\": container with ID starting with 7b9c1ad796d98cd30e02b6488243d0176fa20aca93495eaa2d607d39579533d5 not found: ID does not exist"
Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.367769 4922 scope.go:117] "RemoveContainer" containerID="c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222"
Jan 26 14:48:02 crc kubenswrapper[4922]: E0126 14:48:02.368200 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222\": container with ID starting with c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222 not found: ID does not exist" containerID="c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222"
Jan 26 14:48:02 crc kubenswrapper[4922]: I0126 14:48:02.368233 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222"} err="failed to get container status \"c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222\": rpc error: code = NotFound desc = could not find container \"c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222\": container with ID starting with c35941798245c8922f03f5a289320119c0e8e89e9a0fa3b836a03ddbda5ad222 not found: ID does not exist"
Jan 26 14:48:03 crc kubenswrapper[4922]: I0126 14:48:03.105089 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" path="/var/lib/kubelet/pods/d89cfd7d-50a6-40ce-875c-4442dad4bc4a/volumes"
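The alternating "RemoveContainer" / "ContainerStatus from runtime service failed" pairs above are a benign cleanup race: the kubelet removes a container and then asks CRI-O for its status, and the runtime answers with gRPC NotFound because the container is already gone. A minimal client-side sketch of how such an error is typically classified, using the standard grpc-go status helpers (an illustration, not kubelet's actual code):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI call failed only because the container
// no longer exists -- expected during teardown, not a real failure.
func alreadyGone(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Shaped like the runtime errors logged in the entries above.
	err := status.Error(codes.NotFound, "could not find container")
	if alreadyGone(err) {
		fmt.Println("container already removed; treating as success")
	}
}
```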
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:48:16 crc kubenswrapper[4922]: I0126 14:48:16.093926 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2" Jan 26 14:48:16 crc kubenswrapper[4922]: E0126 14:48:16.094839 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:48:30 crc kubenswrapper[4922]: E0126 14:48:30.834352 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd09600e_1e19_4a04_8e03_12312a20e513.slice/crio-584487fe06f14de7fdb55e970b6d9de91a59759417d618b4db9ef235174b40d6.scope\": RecentStats: unable to find data in memory cache]" Jan 26 14:48:31 crc kubenswrapper[4922]: I0126 14:48:31.093181 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2" Jan 26 14:48:31 crc kubenswrapper[4922]: E0126 14:48:31.093470 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:48:31 crc kubenswrapper[4922]: I0126 14:48:31.514602 4922 generic.go:334] "Generic (PLEG): container finished" podID="dd09600e-1e19-4a04-8e03-12312a20e513" containerID="584487fe06f14de7fdb55e970b6d9de91a59759417d618b4db9ef235174b40d6" exitCode=0 Jan 26 14:48:31 crc kubenswrapper[4922]: I0126 14:48:31.514645 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" event={"ID":"dd09600e-1e19-4a04-8e03-12312a20e513","Type":"ContainerDied","Data":"584487fe06f14de7fdb55e970b6d9de91a59759417d618b4db9ef235174b40d6"} Jan 26 14:48:32 crc kubenswrapper[4922]: I0126 14:48:32.938089 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.122877 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzbmd\" (UniqueName: \"kubernetes.io/projected/dd09600e-1e19-4a04-8e03-12312a20e513-kube-api-access-nzbmd\") pod \"dd09600e-1e19-4a04-8e03-12312a20e513\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.122978 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-ssh-key-openstack-edpm-ipam\") pod \"dd09600e-1e19-4a04-8e03-12312a20e513\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.123152 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-nova-metadata-neutron-config-0\") pod \"dd09600e-1e19-4a04-8e03-12312a20e513\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.123250 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-ovn-metadata-agent-neutron-config-0\") pod \"dd09600e-1e19-4a04-8e03-12312a20e513\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.123279 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-metadata-combined-ca-bundle\") pod \"dd09600e-1e19-4a04-8e03-12312a20e513\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.123376 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-inventory\") pod \"dd09600e-1e19-4a04-8e03-12312a20e513\" (UID: \"dd09600e-1e19-4a04-8e03-12312a20e513\") " Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.129167 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dd09600e-1e19-4a04-8e03-12312a20e513" (UID: "dd09600e-1e19-4a04-8e03-12312a20e513"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.135282 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd09600e-1e19-4a04-8e03-12312a20e513-kube-api-access-nzbmd" (OuterVolumeSpecName: "kube-api-access-nzbmd") pod "dd09600e-1e19-4a04-8e03-12312a20e513" (UID: "dd09600e-1e19-4a04-8e03-12312a20e513"). InnerVolumeSpecName "kube-api-access-nzbmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.160343 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd09600e-1e19-4a04-8e03-12312a20e513" (UID: "dd09600e-1e19-4a04-8e03-12312a20e513"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.162597 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "dd09600e-1e19-4a04-8e03-12312a20e513" (UID: "dd09600e-1e19-4a04-8e03-12312a20e513"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.163914 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-inventory" (OuterVolumeSpecName: "inventory") pod "dd09600e-1e19-4a04-8e03-12312a20e513" (UID: "dd09600e-1e19-4a04-8e03-12312a20e513"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.164040 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "dd09600e-1e19-4a04-8e03-12312a20e513" (UID: "dd09600e-1e19-4a04-8e03-12312a20e513"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.226563 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzbmd\" (UniqueName: \"kubernetes.io/projected/dd09600e-1e19-4a04-8e03-12312a20e513-kube-api-access-nzbmd\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.226606 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.226619 4922 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.226634 4922 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.226649 4922 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.226663 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd09600e-1e19-4a04-8e03-12312a20e513-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.534244 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2" event={"ID":"dd09600e-1e19-4a04-8e03-12312a20e513","Type":"ContainerDied","Data":"852718f9ad76979ad99b73c298b97af37ed502115dea856557bf626571a3a50a"} Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.534301 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="852718f9ad76979ad99b73c298b97af37ed502115dea856557bf626571a3a50a" Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.534901 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.715939 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"]
Jan 26 14:48:33 crc kubenswrapper[4922]: E0126 14:48:33.716320 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerName="extract-utilities"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716337 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerName="extract-utilities"
Jan 26 14:48:33 crc kubenswrapper[4922]: E0126 14:48:33.716351 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerName="extract-content"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716360 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerName="extract-content"
Jan 26 14:48:33 crc kubenswrapper[4922]: E0126 14:48:33.716375 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerName="extract-content"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716381 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerName="extract-content"
Jan 26 14:48:33 crc kubenswrapper[4922]: E0126 14:48:33.716400 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd09600e-1e19-4a04-8e03-12312a20e513" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716408 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd09600e-1e19-4a04-8e03-12312a20e513" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:48:33 crc kubenswrapper[4922]: E0126 14:48:33.716424 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerName="extract-utilities"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716429 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerName="extract-utilities"
Jan 26 14:48:33 crc kubenswrapper[4922]: E0126 14:48:33.716446 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerName="registry-server"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716451 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerName="registry-server"
Jan 26 14:48:33 crc kubenswrapper[4922]: E0126 14:48:33.716460 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerName="registry-server"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716465 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerName="registry-server"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716639 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9a8788-fb6b-4fb8-bcd3-2fbb2bf840e8" containerName="registry-server"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716653 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89cfd7d-50a6-40ce-875c-4442dad4bc4a" containerName="registry-server"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.716666 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd09600e-1e19-4a04-8e03-12312a20e513" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.717316 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.719331 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.719474 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.719573 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.719668 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.720398 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.734629 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"]
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.836995 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.837049 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.837126 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rpl8\" (UniqueName: \"kubernetes.io/projected/eb0a3861-3e56-4795-a6b3-48870bdf183a-kube-api-access-9rpl8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.837160 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.837588 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.939158 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.939506 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.939570 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rpl8\" (UniqueName: \"kubernetes.io/projected/eb0a3861-3e56-4795-a6b3-48870bdf183a-kube-api-access-9rpl8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.939601 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.939696 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.943841 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.943851 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.944912 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.945142 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:33 crc kubenswrapper[4922]: I0126 14:48:33.958882 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rpl8\" (UniqueName: \"kubernetes.io/projected/eb0a3861-3e56-4795-a6b3-48870bdf183a-kube-api-access-9rpl8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:34 crc kubenswrapper[4922]: I0126 14:48:34.039863 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:48:34 crc kubenswrapper[4922]: I0126 14:48:34.588913 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"]
Jan 26 14:48:35 crc kubenswrapper[4922]: I0126 14:48:35.556805 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4" event={"ID":"eb0a3861-3e56-4795-a6b3-48870bdf183a","Type":"ContainerStarted","Data":"2be846934316194b2ef6503d63db6a07b1841a5575075d6a19f72eaca1bbc23e"}
Jan 26 14:48:35 crc kubenswrapper[4922]: I0126 14:48:35.557349 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4" event={"ID":"eb0a3861-3e56-4795-a6b3-48870bdf183a","Type":"ContainerStarted","Data":"6f1351f5b5e1614532fe91ddb86b6d47c23fa11db183ee52b931879467fccd96"}
Jan 26 14:48:35 crc kubenswrapper[4922]: I0126 14:48:35.582493 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4" podStartSLOduration=1.964611076 podStartE2EDuration="2.582473165s" podCreationTimestamp="2026-01-26 14:48:33 +0000 UTC" firstStartedPulling="2026-01-26 14:48:34.602142085 +0000 UTC m=+2331.804404867" lastFinishedPulling="2026-01-26 14:48:35.220004184 +0000 UTC m=+2332.422266956" observedRunningTime="2026-01-26 14:48:35.575050533 +0000 UTC m=+2332.777313305" watchObservedRunningTime="2026-01-26 14:48:35.582473165 +0000 UTC m=+2332.784735957"
Jan 26 14:48:45 crc kubenswrapper[4922]: I0126 14:48:45.093304 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:48:45 crc kubenswrapper[4922]: E0126 14:48:45.094274 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:48:57 crc kubenswrapper[4922]: I0126 14:48:57.093133 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:48:57 crc kubenswrapper[4922]: E0126 14:48:57.093856 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:49:08 crc kubenswrapper[4922]: I0126 14:49:08.094440 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:49:08 crc kubenswrapper[4922]: E0126 14:49:08.095719 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:49:22 crc kubenswrapper[4922]: I0126 14:49:22.092405 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:49:22 crc kubenswrapper[4922]: E0126 14:49:22.093137 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:49:37 crc kubenswrapper[4922]: I0126 14:49:37.092370 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:49:37 crc kubenswrapper[4922]: E0126 14:49:37.093028 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:49:52 crc kubenswrapper[4922]: I0126 14:49:52.093037 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:49:52 crc kubenswrapper[4922]: E0126 14:49:52.093859 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:50:05 crc kubenswrapper[4922]: I0126 14:50:05.092556 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:50:05 crc kubenswrapper[4922]: E0126 14:50:05.093462 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:50:16 crc kubenswrapper[4922]: I0126 14:50:16.092669 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:50:16 crc kubenswrapper[4922]: E0126 14:50:16.093502 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:50:30 crc kubenswrapper[4922]: I0126 14:50:30.092262 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:50:30 crc kubenswrapper[4922]: E0126 14:50:30.093175 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:50:43 crc kubenswrapper[4922]: I0126 14:50:43.103264 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:50:43 crc kubenswrapper[4922]: E0126 14:50:43.105783 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:50:58 crc kubenswrapper[4922]: I0126 14:50:58.093635 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:50:58 crc kubenswrapper[4922]: E0126 14:50:58.094470 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:51:12 crc kubenswrapper[4922]: I0126 14:51:12.093646 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:51:12 crc kubenswrapper[4922]: E0126 14:51:12.094467 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:51:16 crc kubenswrapper[4922]: I0126 14:51:16.779765 4922 scope.go:117] "RemoveContainer" containerID="74dc5e40d83c9ec1ddbd794d9285bf3d1b59c315f510ded39164cfa2fb435e5a"
Jan 26 14:51:16 crc kubenswrapper[4922]: I0126 14:51:16.810424 4922 scope.go:117] "RemoveContainer" containerID="6131608d6ed35e5ff5c8bcd6332015837a354723c098643667623c0dbd717cdc"
Jan 26 14:51:16 crc kubenswrapper[4922]: I0126 14:51:16.845926 4922 scope.go:117] "RemoveContainer" containerID="6eb82925e095dc359f90de3fa9e23d7866d15b88e829ef824387937378528eaf"
Jan 26 14:51:25 crc kubenswrapper[4922]: I0126 14:51:25.092956 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:51:25 crc kubenswrapper[4922]: E0126 14:51:25.094222 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:51:36 crc kubenswrapper[4922]: I0126 14:51:36.093257 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:51:36 crc kubenswrapper[4922]: E0126 14:51:36.094004 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:51:49 crc kubenswrapper[4922]: I0126 14:51:49.093253 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:51:49 crc kubenswrapper[4922]: E0126 14:51:49.094216 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:52:02 crc kubenswrapper[4922]: I0126 14:52:02.093230 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:52:02 crc kubenswrapper[4922]: E0126 14:52:02.094452 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:52:16 crc kubenswrapper[4922]: I0126 14:52:16.093240 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:52:16 crc kubenswrapper[4922]: I0126 14:52:16.734968 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"d62b2e57f90f24414a8311e1d8233d1064718948cc3f25eeb4b23b97fa4decc8"}
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.620113 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fzzs2"]
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.624586 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.639458 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzzs2"]
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.758832 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55mqn\" (UniqueName: \"kubernetes.io/projected/7374b589-e045-4657-ba4f-b9c2ee39796c-kube-api-access-55mqn\") pod \"redhat-operators-fzzs2\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") " pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.758928 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-utilities\") pod \"redhat-operators-fzzs2\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") " pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.759017 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-catalog-content\") pod \"redhat-operators-fzzs2\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") " pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.860855 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-catalog-content\") pod \"redhat-operators-fzzs2\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") " pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.860985 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55mqn\" (UniqueName: \"kubernetes.io/projected/7374b589-e045-4657-ba4f-b9c2ee39796c-kube-api-access-55mqn\") pod \"redhat-operators-fzzs2\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") " pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.861114 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-utilities\") pod \"redhat-operators-fzzs2\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") " pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.861439 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-catalog-content\") pod \"redhat-operators-fzzs2\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") " pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.861673 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-utilities\") pod \"redhat-operators-fzzs2\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") " pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.883538 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55mqn\" (UniqueName: \"kubernetes.io/projected/7374b589-e045-4657-ba4f-b9c2ee39796c-kube-api-access-55mqn\") pod \"redhat-operators-fzzs2\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") " pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:38 crc kubenswrapper[4922]: I0126 14:52:38.953575 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:39 crc kubenswrapper[4922]: I0126 14:52:39.460192 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzzs2"]
Jan 26 14:52:39 crc kubenswrapper[4922]: I0126 14:52:39.960192 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzzs2" event={"ID":"7374b589-e045-4657-ba4f-b9c2ee39796c","Type":"ContainerStarted","Data":"c90df6943a5e3101a74d6681cb3285df8eab96bf4efbf19245678fcfce2ac5dd"}
Jan 26 14:52:41 crc kubenswrapper[4922]: I0126 14:52:41.982019 4922 generic.go:334] "Generic (PLEG): container finished" podID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerID="56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf" exitCode=0
Jan 26 14:52:41 crc kubenswrapper[4922]: I0126 14:52:41.982112 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzzs2" event={"ID":"7374b589-e045-4657-ba4f-b9c2ee39796c","Type":"ContainerDied","Data":"56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf"}
Jan 26 14:52:41 crc kubenswrapper[4922]: I0126 14:52:41.986980 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 14:52:42 crc kubenswrapper[4922]: I0126 14:52:42.995847 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzzs2" event={"ID":"7374b589-e045-4657-ba4f-b9c2ee39796c","Type":"ContainerStarted","Data":"6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d"}
Jan 26 14:52:46 crc kubenswrapper[4922]: I0126 14:52:46.020522 4922 generic.go:334] "Generic (PLEG): container finished" podID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerID="6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d" exitCode=0
Jan 26 14:52:46 crc kubenswrapper[4922]: I0126 14:52:46.020563 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzzs2" event={"ID":"7374b589-e045-4657-ba4f-b9c2ee39796c","Type":"ContainerDied","Data":"6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d"}
Jan 26 14:52:48 crc kubenswrapper[4922]: I0126 14:52:48.044941 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzzs2" event={"ID":"7374b589-e045-4657-ba4f-b9c2ee39796c","Type":"ContainerStarted","Data":"4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940"}
Jan 26 14:52:48 crc kubenswrapper[4922]: I0126 14:52:48.069118 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fzzs2" podStartSLOduration=4.563102743 podStartE2EDuration="10.069097066s" podCreationTimestamp="2026-01-26 14:52:38 +0000 UTC" firstStartedPulling="2026-01-26 14:52:41.986717501 +0000 UTC m=+2579.188980273" lastFinishedPulling="2026-01-26 14:52:47.492711824 +0000 UTC m=+2584.694974596" observedRunningTime="2026-01-26 14:52:48.062636782 +0000 UTC m=+2585.264899584" watchObservedRunningTime="2026-01-26 14:52:48.069097066 +0000 UTC m=+2585.271359838"
Jan 26 14:52:48 crc kubenswrapper[4922]: I0126 14:52:48.954036 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:48 crc kubenswrapper[4922]: I0126 14:52:48.954124 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:50 crc kubenswrapper[4922]: I0126 14:52:50.006352 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzzs2" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerName="registry-server" probeResult="failure" output=<
Jan 26 14:52:50 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s
Jan 26 14:52:50 crc kubenswrapper[4922]: >
Jan 26 14:52:59 crc kubenswrapper[4922]: I0126 14:52:59.030621 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:59 crc kubenswrapper[4922]: I0126 14:52:59.108024 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:52:59 crc kubenswrapper[4922]: I0126 14:52:59.277326 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzzs2"]
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.180463 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fzzs2" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerName="registry-server" containerID="cri-o://4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940" gracePeriod=2
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.655113 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.846241 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-utilities\") pod \"7374b589-e045-4657-ba4f-b9c2ee39796c\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") "
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.846423 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55mqn\" (UniqueName: \"kubernetes.io/projected/7374b589-e045-4657-ba4f-b9c2ee39796c-kube-api-access-55mqn\") pod \"7374b589-e045-4657-ba4f-b9c2ee39796c\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") "
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.846491 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-utilities" (OuterVolumeSpecName: "utilities") pod "7374b589-e045-4657-ba4f-b9c2ee39796c" (UID: "7374b589-e045-4657-ba4f-b9c2ee39796c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.847519 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-catalog-content\") pod \"7374b589-e045-4657-ba4f-b9c2ee39796c\" (UID: \"7374b589-e045-4657-ba4f-b9c2ee39796c\") "
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.848250 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.852895 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7374b589-e045-4657-ba4f-b9c2ee39796c-kube-api-access-55mqn" (OuterVolumeSpecName: "kube-api-access-55mqn") pod "7374b589-e045-4657-ba4f-b9c2ee39796c" (UID: "7374b589-e045-4657-ba4f-b9c2ee39796c"). InnerVolumeSpecName "kube-api-access-55mqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.949425 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55mqn\" (UniqueName: \"kubernetes.io/projected/7374b589-e045-4657-ba4f-b9c2ee39796c-kube-api-access-55mqn\") on node \"crc\" DevicePath \"\""
Jan 26 14:53:00 crc kubenswrapper[4922]: I0126 14:53:00.975547 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7374b589-e045-4657-ba4f-b9c2ee39796c" (UID: "7374b589-e045-4657-ba4f-b9c2ee39796c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.051085 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7374b589-e045-4657-ba4f-b9c2ee39796c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.190409 4922 generic.go:334] "Generic (PLEG): container finished" podID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerID="4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940" exitCode=0
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.190618 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzzs2" event={"ID":"7374b589-e045-4657-ba4f-b9c2ee39796c","Type":"ContainerDied","Data":"4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940"}
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.190689 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzzs2" event={"ID":"7374b589-e045-4657-ba4f-b9c2ee39796c","Type":"ContainerDied","Data":"c90df6943a5e3101a74d6681cb3285df8eab96bf4efbf19245678fcfce2ac5dd"}
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.190710 4922 scope.go:117] "RemoveContainer" containerID="4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940"
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.191773 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzzs2"
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.216784 4922 scope.go:117] "RemoveContainer" containerID="6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d"
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.217619 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzzs2"]
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.226149 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fzzs2"]
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.247123 4922 scope.go:117] "RemoveContainer" containerID="56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf"
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.305135 4922 scope.go:117] "RemoveContainer" containerID="4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940"
Jan 26 14:53:01 crc kubenswrapper[4922]: E0126 14:53:01.305849 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940\": container with ID starting with 4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940 not found: ID does not exist" containerID="4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940"
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.305896 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940"} err="failed to get container status \"4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940\": rpc error: code = NotFound desc = could not find container \"4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940\": container with ID starting with 4935cb2599805b8d87241d8fe705c2d94af1eca586abb1d8052ec363adb3c940 not found: ID does not exist"
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.305928 4922 scope.go:117] "RemoveContainer" containerID="6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d"
Jan 26 14:53:01 crc kubenswrapper[4922]: E0126 14:53:01.306336 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d\": container with ID starting with 6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d not found: ID does not exist" containerID="6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d"
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.306384 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d"} err="failed to get container status \"6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d\": rpc error: code = NotFound desc = could not find container \"6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d\": container with ID starting with 6dfacefb43bd3331e00365b93d811c62da2895e6ab4f72b8eebc45400076105d not found: ID does not exist"
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.306412 4922 scope.go:117] "RemoveContainer" containerID="56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf"
Jan 26 14:53:01 crc kubenswrapper[4922]: E0126 14:53:01.306704 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf\": container with ID starting with 56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf not found: ID does not exist" containerID="56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf"
Jan 26 14:53:01 crc kubenswrapper[4922]: I0126 14:53:01.306724 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf"} err="failed to get container status \"56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf\": rpc error: code = NotFound desc = could not find container \"56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf\": container with ID starting with 56c062f1be289c37f24dc3b02f36920ec85ba671dd86e69d1db898436c6d52cf not found: ID does not exist"
Jan 26 14:53:03 crc kubenswrapper[4922]: I0126 14:53:03.103148 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" path="/var/lib/kubelet/pods/7374b589-e045-4657-ba4f-b9c2ee39796c/volumes"
Jan 26 14:53:05 crc kubenswrapper[4922]: I0126 14:53:05.230314 4922 generic.go:334] "Generic (PLEG): container finished" podID="eb0a3861-3e56-4795-a6b3-48870bdf183a" containerID="2be846934316194b2ef6503d63db6a07b1841a5575075d6a19f72eaca1bbc23e" exitCode=0
Jan 26 14:53:05 crc kubenswrapper[4922]: I0126 14:53:05.230593 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4" event={"ID":"eb0a3861-3e56-4795-a6b3-48870bdf183a","Type":"ContainerDied","Data":"2be846934316194b2ef6503d63db6a07b1841a5575075d6a19f72eaca1bbc23e"}
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.653536 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.673839 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-ssh-key-openstack-edpm-ipam\") pod \"eb0a3861-3e56-4795-a6b3-48870bdf183a\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") "
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.673938 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-combined-ca-bundle\") pod \"eb0a3861-3e56-4795-a6b3-48870bdf183a\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") "
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.673989 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-inventory\") pod \"eb0a3861-3e56-4795-a6b3-48870bdf183a\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") "
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.674052 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-secret-0\") pod \"eb0a3861-3e56-4795-a6b3-48870bdf183a\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") "
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.674197 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rpl8\" (UniqueName: \"kubernetes.io/projected/eb0a3861-3e56-4795-a6b3-48870bdf183a-kube-api-access-9rpl8\") pod \"eb0a3861-3e56-4795-a6b3-48870bdf183a\" (UID: \"eb0a3861-3e56-4795-a6b3-48870bdf183a\") "
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.682301 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0a3861-3e56-4795-a6b3-48870bdf183a-kube-api-access-9rpl8" (OuterVolumeSpecName: "kube-api-access-9rpl8") pod "eb0a3861-3e56-4795-a6b3-48870bdf183a" (UID: "eb0a3861-3e56-4795-a6b3-48870bdf183a"). InnerVolumeSpecName "kube-api-access-9rpl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.692988 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "eb0a3861-3e56-4795-a6b3-48870bdf183a" (UID: "eb0a3861-3e56-4795-a6b3-48870bdf183a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.712559 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eb0a3861-3e56-4795-a6b3-48870bdf183a" (UID: "eb0a3861-3e56-4795-a6b3-48870bdf183a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.716147 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "eb0a3861-3e56-4795-a6b3-48870bdf183a" (UID: "eb0a3861-3e56-4795-a6b3-48870bdf183a"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.716640 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-inventory" (OuterVolumeSpecName: "inventory") pod "eb0a3861-3e56-4795-a6b3-48870bdf183a" (UID: "eb0a3861-3e56-4795-a6b3-48870bdf183a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.779752 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rpl8\" (UniqueName: \"kubernetes.io/projected/eb0a3861-3e56-4795-a6b3-48870bdf183a-kube-api-access-9rpl8\") on node \"crc\" DevicePath \"\""
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.779792 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.779805 4922 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.779827 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 14:53:06 crc kubenswrapper[4922]: I0126 14:53:06.779840 4922 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/eb0a3861-3e56-4795-a6b3-48870bdf183a-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.251981 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4" event={"ID":"eb0a3861-3e56-4795-a6b3-48870bdf183a","Type":"ContainerDied","Data":"6f1351f5b5e1614532fe91ddb86b6d47c23fa11db183ee52b931879467fccd96"}
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.254373 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f1351f5b5e1614532fe91ddb86b6d47c23fa11db183ee52b931879467fccd96"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.254541 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.338174 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"]
Jan 26 14:53:07 crc kubenswrapper[4922]: E0126 14:53:07.338578 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerName="extract-content"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.338594 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerName="extract-content"
Jan 26 14:53:07 crc kubenswrapper[4922]: E0126 14:53:07.338607 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerName="extract-utilities"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.338614 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerName="extract-utilities"
Jan 26 14:53:07 crc kubenswrapper[4922]: E0126 14:53:07.338624 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerName="registry-server"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.338630 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerName="registry-server"
Jan 26 14:53:07 crc kubenswrapper[4922]: E0126 14:53:07.338660 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0a3861-3e56-4795-a6b3-48870bdf183a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.338668 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0a3861-3e56-4795-a6b3-48870bdf183a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.338860 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7374b589-e045-4657-ba4f-b9c2ee39796c" containerName="registry-server"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.338880 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0a3861-3e56-4795-a6b3-48870bdf183a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.339579 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.344806 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.350236 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.350287 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.350383 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.350643 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.350713 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.350759 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.354256 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"]
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.390608 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg95t\" (UniqueName: \"kubernetes.io/projected/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-kube-api-access-gg95t\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.390664 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.390698 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.390739 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.390760 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.390784 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.390807 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.390865 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.390912 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.492853 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.492912 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg95t\" (UniqueName: \"kubernetes.io/projected/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-kube-api-access-gg95t\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.492947 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.492980 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.493024 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.493049 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.493109 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.493151 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.493235 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.495503 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.498757 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.499087 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.499213 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.500116 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.500908 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.507972 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.510535 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg95t\" (UniqueName: \"kubernetes.io/projected/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-kube-api-access-gg95t\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.514225 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-kljvk\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:07 crc kubenswrapper[4922]: I0126 14:53:07.657646 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"
Jan 26 14:53:08 crc kubenswrapper[4922]: I0126 14:53:08.217187 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk"]
Jan 26 14:53:08 crc kubenswrapper[4922]: I0126 14:53:08.263210 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk" event={"ID":"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b","Type":"ContainerStarted","Data":"324fe75e26e6a004e4e92ff33ddbf165786642d34f55a2f10bbd40e8899369a1"}
Jan 26 14:53:09 crc kubenswrapper[4922]: I0126 14:53:09.273088 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk" event={"ID":"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b","Type":"ContainerStarted","Data":"d9d1b84d550d73778f42d2014785dd8ce919d90a62174abc55bed5a7927ec661"}
Jan 26 14:54:41 crc kubenswrapper[4922]: I0126 14:54:41.306770 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:54:41 crc kubenswrapper[4922]: I0126 14:54:41.307576 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:55:11 crc kubenswrapper[4922]: I0126 14:55:11.306910 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:55:11 crc kubenswrapper[4922]: I0126 14:55:11.307479 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.306802 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.307248 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.307287 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j"
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.307997 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d62b2e57f90f24414a8311e1d8233d1064718948cc3f25eeb4b23b97fa4decc8"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.308051 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://d62b2e57f90f24414a8311e1d8233d1064718948cc3f25eeb4b23b97fa4decc8" gracePeriod=600
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.721608 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="d62b2e57f90f24414a8311e1d8233d1064718948cc3f25eeb4b23b97fa4decc8" exitCode=0
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.721673 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"d62b2e57f90f24414a8311e1d8233d1064718948cc3f25eeb4b23b97fa4decc8"}
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.721956 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"}
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.721988 4922 scope.go:117] "RemoveContainer" containerID="45d063fef908cf4a7b6abbf9debb411f83f16a1315d528efa1f6dd6a15308bb2"
Jan 26 14:55:41 crc kubenswrapper[4922]: I0126 14:55:41.748646 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk" podStartSLOduration=154.113817344 podStartE2EDuration="2m34.748622762s" podCreationTimestamp="2026-01-26 14:53:07 +0000 UTC" firstStartedPulling="2026-01-26 14:53:08.221966801 +0000 UTC m=+2605.424229573" lastFinishedPulling="2026-01-26 14:53:08.856772219 +0000 UTC m=+2606.059034991" observedRunningTime="2026-01-26 14:53:09.304028919 +0000 UTC m=+2606.506291781" watchObservedRunningTime="2026-01-26 14:55:41.748622762 +0000 UTC m=+2758.950885534"
Jan 26 14:55:51 crc kubenswrapper[4922]: I0126 14:55:51.820200 4922 generic.go:334] "Generic (PLEG): container finished" podID="91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" containerID="d9d1b84d550d73778f42d2014785dd8ce919d90a62174abc55bed5a7927ec661" exitCode=0
Jan 26 14:55:51 crc kubenswrapper[4922]: I0126 14:55:51.820466 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk" event={"ID":"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b","Type":"ContainerDied","Data":"d9d1b84d550d73778f42d2014785dd8ce919d90a62174abc55bed5a7927ec661"}
Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.353677 4922 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.426509 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-1\") pod \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.426684 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-0\") pod \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.426785 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-0\") pod \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.426812 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-combined-ca-bundle\") pod \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.427002 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-ssh-key-openstack-edpm-ipam\") pod \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.427080 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-extra-config-0\") pod \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.427171 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-1\") pod \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.427272 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-inventory\") pod \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.427431 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg95t\" (UniqueName: \"kubernetes.io/projected/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-kube-api-access-gg95t\") pod \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\" (UID: \"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b\") " Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.461553 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-kube-api-access-gg95t" (OuterVolumeSpecName: "kube-api-access-gg95t") pod "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" (UID: "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b"). InnerVolumeSpecName "kube-api-access-gg95t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.463804 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" (UID: "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.470470 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" (UID: "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.472877 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" (UID: "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.475192 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" (UID: "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.475632 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" (UID: "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.486618 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" (UID: "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.492954 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" (UID: "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.516220 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-inventory" (OuterVolumeSpecName: "inventory") pod "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" (UID: "91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.534522 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg95t\" (UniqueName: \"kubernetes.io/projected/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-kube-api-access-gg95t\") on node \"crc\" DevicePath \"\"" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.534566 4922 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.534579 4922 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.534591 4922 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.534603 4922 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.534616 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.534629 4922 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.534639 4922 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.534650 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.847486 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk" event={"ID":"91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b","Type":"ContainerDied","Data":"324fe75e26e6a004e4e92ff33ddbf165786642d34f55a2f10bbd40e8899369a1"} Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.847762 4922 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="324fe75e26e6a004e4e92ff33ddbf165786642d34f55a2f10bbd40e8899369a1" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.847763 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-kljvk" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.973713 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh"] Jan 26 14:55:53 crc kubenswrapper[4922]: E0126 14:55:53.974209 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.974228 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.974492 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.975350 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.981054 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.981171 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.981268 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.981274 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fr242" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.981301 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 14:55:53 crc kubenswrapper[4922]: I0126 14:55:53.995206 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh"] Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.146091 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.146190 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.146582 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-dzqbx\" (UniqueName: \"kubernetes.io/projected/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-kube-api-access-dzqbx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.146846 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.147010 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.147109 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.147143 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.249450 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.249615 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.249864 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzqbx\" (UniqueName: \"kubernetes.io/projected/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-kube-api-access-dzqbx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.249977 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.250056 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.250140 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.250175 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.254496 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.254559 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.254932 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.256744 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.257312 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.258896 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.278992 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzqbx\" (UniqueName: \"kubernetes.io/projected/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-kube-api-access-dzqbx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8nddh\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.297619 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.366189 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7t2c7"] Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.370356 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.412433 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t2c7"] Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.454116 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-catalog-content\") pod \"redhat-marketplace-7t2c7\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.454204 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-utilities\") pod \"redhat-marketplace-7t2c7\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.454238 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zx4f\" (UniqueName: \"kubernetes.io/projected/5dec0b81-d66d-4203-b4b3-2da1d544ae04-kube-api-access-2zx4f\") pod \"redhat-marketplace-7t2c7\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.555922 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-utilities\") pod \"redhat-marketplace-7t2c7\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.556022 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zx4f\" (UniqueName: \"kubernetes.io/projected/5dec0b81-d66d-4203-b4b3-2da1d544ae04-kube-api-access-2zx4f\") pod \"redhat-marketplace-7t2c7\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.556534 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-utilities\") pod \"redhat-marketplace-7t2c7\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.556650 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-catalog-content\") pod \"redhat-marketplace-7t2c7\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.557098 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-catalog-content\") pod \"redhat-marketplace-7t2c7\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.594040 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2zx4f\" (UniqueName: \"kubernetes.io/projected/5dec0b81-d66d-4203-b4b3-2da1d544ae04-kube-api-access-2zx4f\") pod \"redhat-marketplace-7t2c7\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.749166 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:55:54 crc kubenswrapper[4922]: I0126 14:55:54.910459 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh"] Jan 26 14:55:54 crc kubenswrapper[4922]: W0126 14:55:54.916752 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4eaec89f_007e_4ecf_a60f_f9f6729dfe13.slice/crio-c829c9e6d50ea14ca8cd23dfa60f257d0380f2120e99393e71034418ccc7f416 WatchSource:0}: Error finding container c829c9e6d50ea14ca8cd23dfa60f257d0380f2120e99393e71034418ccc7f416: Status 404 returned error can't find the container with id c829c9e6d50ea14ca8cd23dfa60f257d0380f2120e99393e71034418ccc7f416 Jan 26 14:55:55 crc kubenswrapper[4922]: I0126 14:55:55.299038 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t2c7"] Jan 26 14:55:55 crc kubenswrapper[4922]: W0126 14:55:55.310507 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dec0b81_d66d_4203_b4b3_2da1d544ae04.slice/crio-8919be9647e2fab484d63e7ac3d80beb385d15120ebee3d33b603701aa9c868a WatchSource:0}: Error finding container 8919be9647e2fab484d63e7ac3d80beb385d15120ebee3d33b603701aa9c868a: Status 404 returned error can't find the container with id 8919be9647e2fab484d63e7ac3d80beb385d15120ebee3d33b603701aa9c868a Jan 26 14:55:55 crc kubenswrapper[4922]: I0126 14:55:55.870052 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" event={"ID":"4eaec89f-007e-4ecf-a60f-f9f6729dfe13","Type":"ContainerStarted","Data":"9059bdc8cd55e1e28c870322845bf6a3e395e74d39d4e47fe03e76097fa2e27d"} Jan 26 14:55:55 crc kubenswrapper[4922]: I0126 14:55:55.870212 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" event={"ID":"4eaec89f-007e-4ecf-a60f-f9f6729dfe13","Type":"ContainerStarted","Data":"c829c9e6d50ea14ca8cd23dfa60f257d0380f2120e99393e71034418ccc7f416"} Jan 26 14:55:55 crc kubenswrapper[4922]: I0126 14:55:55.872744 4922 generic.go:334] "Generic (PLEG): container finished" podID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerID="6a7d854dec1242c980285c63dc4920b850aef7591e5f1d7717750572834bb903" exitCode=0 Jan 26 14:55:55 crc kubenswrapper[4922]: I0126 14:55:55.872795 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t2c7" event={"ID":"5dec0b81-d66d-4203-b4b3-2da1d544ae04","Type":"ContainerDied","Data":"6a7d854dec1242c980285c63dc4920b850aef7591e5f1d7717750572834bb903"} Jan 26 14:55:55 crc kubenswrapper[4922]: I0126 14:55:55.872833 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t2c7" event={"ID":"5dec0b81-d66d-4203-b4b3-2da1d544ae04","Type":"ContainerStarted","Data":"8919be9647e2fab484d63e7ac3d80beb385d15120ebee3d33b603701aa9c868a"} Jan 26 14:55:55 crc kubenswrapper[4922]: I0126 14:55:55.889733 
4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" podStartSLOduration=2.2067372929999998 podStartE2EDuration="2.889717873s" podCreationTimestamp="2026-01-26 14:55:53 +0000 UTC" firstStartedPulling="2026-01-26 14:55:54.919692519 +0000 UTC m=+2772.121955291" lastFinishedPulling="2026-01-26 14:55:55.602673099 +0000 UTC m=+2772.804935871" observedRunningTime="2026-01-26 14:55:55.887695 +0000 UTC m=+2773.089957762" watchObservedRunningTime="2026-01-26 14:55:55.889717873 +0000 UTC m=+2773.091980645" Jan 26 14:55:56 crc kubenswrapper[4922]: I0126 14:55:56.882832 4922 generic.go:334] "Generic (PLEG): container finished" podID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerID="98936527afb8baf51ee83673528703bbbfd82496dd53681d2feffd52ec751cbf" exitCode=0 Jan 26 14:55:56 crc kubenswrapper[4922]: I0126 14:55:56.882955 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t2c7" event={"ID":"5dec0b81-d66d-4203-b4b3-2da1d544ae04","Type":"ContainerDied","Data":"98936527afb8baf51ee83673528703bbbfd82496dd53681d2feffd52ec751cbf"} Jan 26 14:55:57 crc kubenswrapper[4922]: I0126 14:55:57.896742 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t2c7" event={"ID":"5dec0b81-d66d-4203-b4b3-2da1d544ae04","Type":"ContainerStarted","Data":"28679bae308233b9e598476977aca64c40e271fa292e5577198cb00e981a2556"} Jan 26 14:55:57 crc kubenswrapper[4922]: I0126 14:55:57.919779 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7t2c7" podStartSLOduration=2.320514163 podStartE2EDuration="3.919763645s" podCreationTimestamp="2026-01-26 14:55:54 +0000 UTC" firstStartedPulling="2026-01-26 14:55:55.874356852 +0000 UTC m=+2773.076619624" lastFinishedPulling="2026-01-26 14:55:57.473606334 +0000 UTC m=+2774.675869106" observedRunningTime="2026-01-26 14:55:57.913319432 +0000 UTC m=+2775.115582214" watchObservedRunningTime="2026-01-26 14:55:57.919763645 +0000 UTC m=+2775.122026417" Jan 26 14:56:04 crc kubenswrapper[4922]: I0126 14:56:04.749418 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:56:04 crc kubenswrapper[4922]: I0126 14:56:04.749891 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:56:04 crc kubenswrapper[4922]: I0126 14:56:04.793737 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:56:05 crc kubenswrapper[4922]: I0126 14:56:05.014326 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:56:05 crc kubenswrapper[4922]: I0126 14:56:05.055858 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t2c7"] Jan 26 14:56:06 crc kubenswrapper[4922]: I0126 14:56:06.984215 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7t2c7" podUID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerName="registry-server" containerID="cri-o://28679bae308233b9e598476977aca64c40e271fa292e5577198cb00e981a2556" gracePeriod=2 Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.013948 4922 generic.go:334] "Generic (PLEG): container finished" 
podID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerID="28679bae308233b9e598476977aca64c40e271fa292e5577198cb00e981a2556" exitCode=0 Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.014323 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t2c7" event={"ID":"5dec0b81-d66d-4203-b4b3-2da1d544ae04","Type":"ContainerDied","Data":"28679bae308233b9e598476977aca64c40e271fa292e5577198cb00e981a2556"} Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.568532 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.703095 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-utilities\") pod \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.703282 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-catalog-content\") pod \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.703361 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zx4f\" (UniqueName: \"kubernetes.io/projected/5dec0b81-d66d-4203-b4b3-2da1d544ae04-kube-api-access-2zx4f\") pod \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\" (UID: \"5dec0b81-d66d-4203-b4b3-2da1d544ae04\") " Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.703921 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-utilities" (OuterVolumeSpecName: "utilities") pod "5dec0b81-d66d-4203-b4b3-2da1d544ae04" (UID: "5dec0b81-d66d-4203-b4b3-2da1d544ae04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.704935 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.712267 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dec0b81-d66d-4203-b4b3-2da1d544ae04-kube-api-access-2zx4f" (OuterVolumeSpecName: "kube-api-access-2zx4f") pod "5dec0b81-d66d-4203-b4b3-2da1d544ae04" (UID: "5dec0b81-d66d-4203-b4b3-2da1d544ae04"). InnerVolumeSpecName "kube-api-access-2zx4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.724947 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dec0b81-d66d-4203-b4b3-2da1d544ae04" (UID: "5dec0b81-d66d-4203-b4b3-2da1d544ae04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.806640 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec0b81-d66d-4203-b4b3-2da1d544ae04-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:56:09 crc kubenswrapper[4922]: I0126 14:56:09.806954 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zx4f\" (UniqueName: \"kubernetes.io/projected/5dec0b81-d66d-4203-b4b3-2da1d544ae04-kube-api-access-2zx4f\") on node \"crc\" DevicePath \"\"" Jan 26 14:56:10 crc kubenswrapper[4922]: I0126 14:56:10.029681 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t2c7" event={"ID":"5dec0b81-d66d-4203-b4b3-2da1d544ae04","Type":"ContainerDied","Data":"8919be9647e2fab484d63e7ac3d80beb385d15120ebee3d33b603701aa9c868a"} Jan 26 14:56:10 crc kubenswrapper[4922]: I0126 14:56:10.029752 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7t2c7" Jan 26 14:56:10 crc kubenswrapper[4922]: I0126 14:56:10.030423 4922 scope.go:117] "RemoveContainer" containerID="28679bae308233b9e598476977aca64c40e271fa292e5577198cb00e981a2556" Jan 26 14:56:10 crc kubenswrapper[4922]: I0126 14:56:10.054920 4922 scope.go:117] "RemoveContainer" containerID="98936527afb8baf51ee83673528703bbbfd82496dd53681d2feffd52ec751cbf" Jan 26 14:56:10 crc kubenswrapper[4922]: I0126 14:56:10.064012 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t2c7"] Jan 26 14:56:10 crc kubenswrapper[4922]: I0126 14:56:10.072252 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t2c7"] Jan 26 14:56:10 crc kubenswrapper[4922]: I0126 14:56:10.102913 4922 scope.go:117] "RemoveContainer" containerID="6a7d854dec1242c980285c63dc4920b850aef7591e5f1d7717750572834bb903" Jan 26 14:56:11 crc kubenswrapper[4922]: I0126 14:56:11.105741 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" path="/var/lib/kubelet/pods/5dec0b81-d66d-4203-b4b3-2da1d544ae04/volumes" Jan 26 14:57:14 crc kubenswrapper[4922]: I0126 14:57:14.411546 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-69b95496c5-qvg59" podUID="a2bcb723-e3e3-41f8-9704-10a1f8e78bd7" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 26 14:57:41 crc kubenswrapper[4922]: I0126 14:57:41.307731 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:57:41 crc kubenswrapper[4922]: I0126 14:57:41.308369 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:58:11 crc kubenswrapper[4922]: I0126 14:58:11.306805 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:58:11 crc kubenswrapper[4922]: I0126 14:58:11.307336 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.105435 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5ctzt"] Jan 26 14:58:13 crc kubenswrapper[4922]: E0126 14:58:13.106127 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerName="registry-server" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.106140 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerName="registry-server" Jan 26 14:58:13 crc kubenswrapper[4922]: E0126 14:58:13.106159 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerName="extract-utilities" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.106165 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerName="extract-utilities" Jan 26 14:58:13 crc kubenswrapper[4922]: E0126 14:58:13.106179 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerName="extract-content" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.106185 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerName="extract-content" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.106377 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dec0b81-d66d-4203-b4b3-2da1d544ae04" containerName="registry-server" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.107761 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.131222 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ctzt"] Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.257148 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhj8\" (UniqueName: \"kubernetes.io/projected/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-kube-api-access-qfhj8\") pod \"certified-operators-5ctzt\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.257215 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-utilities\") pod \"certified-operators-5ctzt\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.257281 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-catalog-content\") pod \"certified-operators-5ctzt\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.358692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhj8\" (UniqueName: \"kubernetes.io/projected/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-kube-api-access-qfhj8\") pod \"certified-operators-5ctzt\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.358764 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-utilities\") pod \"certified-operators-5ctzt\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.358840 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-catalog-content\") pod \"certified-operators-5ctzt\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.359327 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-utilities\") pod \"certified-operators-5ctzt\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.359433 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-catalog-content\") pod \"certified-operators-5ctzt\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.380846 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qfhj8\" (UniqueName: \"kubernetes.io/projected/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-kube-api-access-qfhj8\") pod \"certified-operators-5ctzt\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:13 crc kubenswrapper[4922]: I0126 14:58:13.429614 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:14 crc kubenswrapper[4922]: W0126 14:58:14.006018 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0ff2ec9_a1ce_4332_9cf6_36be7893125b.slice/crio-4db720f345c8ad7f9646e74b7946884d130f527b429b34028762449438865b5a WatchSource:0}: Error finding container 4db720f345c8ad7f9646e74b7946884d130f527b429b34028762449438865b5a: Status 404 returned error can't find the container with id 4db720f345c8ad7f9646e74b7946884d130f527b429b34028762449438865b5a Jan 26 14:58:14 crc kubenswrapper[4922]: I0126 14:58:14.011393 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ctzt"] Jan 26 14:58:14 crc kubenswrapper[4922]: I0126 14:58:14.372236 4922 generic.go:334] "Generic (PLEG): container finished" podID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerID="5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672" exitCode=0 Jan 26 14:58:14 crc kubenswrapper[4922]: I0126 14:58:14.372355 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ctzt" event={"ID":"c0ff2ec9-a1ce-4332-9cf6-36be7893125b","Type":"ContainerDied","Data":"5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672"} Jan 26 14:58:14 crc kubenswrapper[4922]: I0126 14:58:14.372485 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ctzt" event={"ID":"c0ff2ec9-a1ce-4332-9cf6-36be7893125b","Type":"ContainerStarted","Data":"4db720f345c8ad7f9646e74b7946884d130f527b429b34028762449438865b5a"} Jan 26 14:58:15 crc kubenswrapper[4922]: I0126 14:58:15.383661 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:58:18 crc kubenswrapper[4922]: I0126 14:58:18.421811 4922 generic.go:334] "Generic (PLEG): container finished" podID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerID="9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb" exitCode=0 Jan 26 14:58:18 crc kubenswrapper[4922]: I0126 14:58:18.421890 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ctzt" event={"ID":"c0ff2ec9-a1ce-4332-9cf6-36be7893125b","Type":"ContainerDied","Data":"9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb"} Jan 26 14:58:22 crc kubenswrapper[4922]: I0126 14:58:22.464941 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ctzt" event={"ID":"c0ff2ec9-a1ce-4332-9cf6-36be7893125b","Type":"ContainerStarted","Data":"47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478"} Jan 26 14:58:22 crc kubenswrapper[4922]: I0126 14:58:22.482975 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5ctzt" podStartSLOduration=3.367517929 podStartE2EDuration="9.482953603s" podCreationTimestamp="2026-01-26 14:58:13 +0000 UTC" firstStartedPulling="2026-01-26 14:58:15.383398729 +0000 UTC 
m=+2912.585661511" lastFinishedPulling="2026-01-26 14:58:21.498834403 +0000 UTC m=+2918.701097185" observedRunningTime="2026-01-26 14:58:22.479988943 +0000 UTC m=+2919.682251715" watchObservedRunningTime="2026-01-26 14:58:22.482953603 +0000 UTC m=+2919.685216375" Jan 26 14:58:23 crc kubenswrapper[4922]: I0126 14:58:23.430646 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:23 crc kubenswrapper[4922]: I0126 14:58:23.430890 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:23 crc kubenswrapper[4922]: I0126 14:58:23.482316 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:29 crc kubenswrapper[4922]: I0126 14:58:29.531530 4922 generic.go:334] "Generic (PLEG): container finished" podID="4eaec89f-007e-4ecf-a60f-f9f6729dfe13" containerID="9059bdc8cd55e1e28c870322845bf6a3e395e74d39d4e47fe03e76097fa2e27d" exitCode=0 Jan 26 14:58:29 crc kubenswrapper[4922]: I0126 14:58:29.531628 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" event={"ID":"4eaec89f-007e-4ecf-a60f-f9f6729dfe13","Type":"ContainerDied","Data":"9059bdc8cd55e1e28c870322845bf6a3e395e74d39d4e47fe03e76097fa2e27d"} Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.026154 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.072235 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-telemetry-combined-ca-bundle\") pod \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.072275 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ssh-key-openstack-edpm-ipam\") pod \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.072309 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-1\") pod \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.072329 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-inventory\") pod \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.072421 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzqbx\" (UniqueName: \"kubernetes.io/projected/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-kube-api-access-dzqbx\") pod \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.072502 
4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-0\") pod \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.072519 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-2\") pod \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\" (UID: \"4eaec89f-007e-4ecf-a60f-f9f6729dfe13\") " Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.079121 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4eaec89f-007e-4ecf-a60f-f9f6729dfe13" (UID: "4eaec89f-007e-4ecf-a60f-f9f6729dfe13"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.080372 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-kube-api-access-dzqbx" (OuterVolumeSpecName: "kube-api-access-dzqbx") pod "4eaec89f-007e-4ecf-a60f-f9f6729dfe13" (UID: "4eaec89f-007e-4ecf-a60f-f9f6729dfe13"). InnerVolumeSpecName "kube-api-access-dzqbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.106958 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "4eaec89f-007e-4ecf-a60f-f9f6729dfe13" (UID: "4eaec89f-007e-4ecf-a60f-f9f6729dfe13"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.121136 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4eaec89f-007e-4ecf-a60f-f9f6729dfe13" (UID: "4eaec89f-007e-4ecf-a60f-f9f6729dfe13"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.122822 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "4eaec89f-007e-4ecf-a60f-f9f6729dfe13" (UID: "4eaec89f-007e-4ecf-a60f-f9f6729dfe13"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.123324 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-inventory" (OuterVolumeSpecName: "inventory") pod "4eaec89f-007e-4ecf-a60f-f9f6729dfe13" (UID: "4eaec89f-007e-4ecf-a60f-f9f6729dfe13"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.129861 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "4eaec89f-007e-4ecf-a60f-f9f6729dfe13" (UID: "4eaec89f-007e-4ecf-a60f-f9f6729dfe13"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.176481 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.176511 4922 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.176521 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.176530 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.176540 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.176552 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzqbx\" (UniqueName: \"kubernetes.io/projected/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-kube-api-access-dzqbx\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.176563 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4eaec89f-007e-4ecf-a60f-f9f6729dfe13-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.552279 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" event={"ID":"4eaec89f-007e-4ecf-a60f-f9f6729dfe13","Type":"ContainerDied","Data":"c829c9e6d50ea14ca8cd23dfa60f257d0380f2120e99393e71034418ccc7f416"} Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.552361 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c829c9e6d50ea14ca8cd23dfa60f257d0380f2120e99393e71034418ccc7f416" Jan 26 14:58:31 crc kubenswrapper[4922]: I0126 14:58:31.552363 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8nddh" Jan 26 14:58:33 crc kubenswrapper[4922]: I0126 14:58:33.498613 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:33 crc kubenswrapper[4922]: I0126 14:58:33.562856 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ctzt"] Jan 26 14:58:33 crc kubenswrapper[4922]: I0126 14:58:33.568812 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5ctzt" podUID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerName="registry-server" containerID="cri-o://47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478" gracePeriod=2 Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.096795 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.241181 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-utilities\") pod \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.241229 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfhj8\" (UniqueName: \"kubernetes.io/projected/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-kube-api-access-qfhj8\") pod \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.241489 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-catalog-content\") pod \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\" (UID: \"c0ff2ec9-a1ce-4332-9cf6-36be7893125b\") " Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.242199 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-utilities" (OuterVolumeSpecName: "utilities") pod "c0ff2ec9-a1ce-4332-9cf6-36be7893125b" (UID: "c0ff2ec9-a1ce-4332-9cf6-36be7893125b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.247230 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-kube-api-access-qfhj8" (OuterVolumeSpecName: "kube-api-access-qfhj8") pod "c0ff2ec9-a1ce-4332-9cf6-36be7893125b" (UID: "c0ff2ec9-a1ce-4332-9cf6-36be7893125b"). InnerVolumeSpecName "kube-api-access-qfhj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.296157 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0ff2ec9-a1ce-4332-9cf6-36be7893125b" (UID: "c0ff2ec9-a1ce-4332-9cf6-36be7893125b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.343560 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.343593 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.343605 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfhj8\" (UniqueName: \"kubernetes.io/projected/c0ff2ec9-a1ce-4332-9cf6-36be7893125b-kube-api-access-qfhj8\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.584491 4922 generic.go:334] "Generic (PLEG): container finished" podID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerID="47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478" exitCode=0 Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.584543 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ctzt" event={"ID":"c0ff2ec9-a1ce-4332-9cf6-36be7893125b","Type":"ContainerDied","Data":"47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478"} Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.584577 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ctzt" event={"ID":"c0ff2ec9-a1ce-4332-9cf6-36be7893125b","Type":"ContainerDied","Data":"4db720f345c8ad7f9646e74b7946884d130f527b429b34028762449438865b5a"} Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.584602 4922 scope.go:117] "RemoveContainer" containerID="47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.584811 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5ctzt" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.623740 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ctzt"] Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.624490 4922 scope.go:117] "RemoveContainer" containerID="9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.633690 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5ctzt"] Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.662681 4922 scope.go:117] "RemoveContainer" containerID="5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.708372 4922 scope.go:117] "RemoveContainer" containerID="47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478" Jan 26 14:58:34 crc kubenswrapper[4922]: E0126 14:58:34.709214 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478\": container with ID starting with 47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478 not found: ID does not exist" containerID="47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.709250 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478"} err="failed to get container status \"47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478\": rpc error: code = NotFound desc = could not find container \"47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478\": container with ID starting with 47f3e078aff729917aabd637066686180c1869efa0345c4b98e6985b1b201478 not found: ID does not exist" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.709274 4922 scope.go:117] "RemoveContainer" containerID="9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb" Jan 26 14:58:34 crc kubenswrapper[4922]: E0126 14:58:34.709688 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb\": container with ID starting with 9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb not found: ID does not exist" containerID="9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.709709 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb"} err="failed to get container status \"9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb\": rpc error: code = NotFound desc = could not find container \"9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb\": container with ID starting with 9e1ad37f7c1bedbefe556297941e0d3836bde0bd9d486643049a2cbdc4ea50eb not found: ID does not exist" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.709721 4922 scope.go:117] "RemoveContainer" containerID="5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672" Jan 26 14:58:34 crc kubenswrapper[4922]: E0126 14:58:34.710169 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672\": container with ID starting with 5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672 not found: ID does not exist" containerID="5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672" Jan 26 14:58:34 crc kubenswrapper[4922]: I0126 14:58:34.710217 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672"} err="failed to get container status \"5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672\": rpc error: code = NotFound desc = could not find container \"5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672\": container with ID starting with 5878afb4bfa79b8a80438801a8f4aa198531cd3dfb4bb2f3d6c6713814ac6672 not found: ID does not exist" Jan 26 14:58:35 crc kubenswrapper[4922]: I0126 14:58:35.104008 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" path="/var/lib/kubelet/pods/c0ff2ec9-a1ce-4332-9cf6-36be7893125b/volumes" Jan 26 14:58:41 crc kubenswrapper[4922]: I0126 14:58:41.306790 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:58:41 crc kubenswrapper[4922]: I0126 14:58:41.307383 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:58:41 crc kubenswrapper[4922]: I0126 14:58:41.307431 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 14:58:41 crc kubenswrapper[4922]: I0126 14:58:41.308266 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:58:41 crc kubenswrapper[4922]: I0126 14:58:41.308324 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" gracePeriod=600 Jan 26 14:58:41 crc kubenswrapper[4922]: I0126 14:58:41.653932 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" exitCode=0 Jan 26 14:58:41 crc kubenswrapper[4922]: I0126 14:58:41.654011 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" 
event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"} Jan 26 14:58:41 crc kubenswrapper[4922]: I0126 14:58:41.654110 4922 scope.go:117] "RemoveContainer" containerID="d62b2e57f90f24414a8311e1d8233d1064718948cc3f25eeb4b23b97fa4decc8" Jan 26 14:58:41 crc kubenswrapper[4922]: E0126 14:58:41.970995 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:58:42 crc kubenswrapper[4922]: I0126 14:58:42.666261 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 14:58:42 crc kubenswrapper[4922]: E0126 14:58:42.666932 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:58:57 crc kubenswrapper[4922]: I0126 14:58:57.092171 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 14:58:57 crc kubenswrapper[4922]: E0126 14:58:57.092917 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.024854 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 26 14:59:08 crc kubenswrapper[4922]: E0126 14:59:08.025795 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaec89f-007e-4ecf-a60f-f9f6729dfe13" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.025812 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaec89f-007e-4ecf-a60f-f9f6729dfe13" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 14:59:08 crc kubenswrapper[4922]: E0126 14:59:08.025835 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerName="registry-server" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.025843 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerName="registry-server" Jan 26 14:59:08 crc kubenswrapper[4922]: E0126 14:59:08.025864 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerName="extract-content" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.025872 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" 
containerName="extract-content" Jan 26 14:59:08 crc kubenswrapper[4922]: E0126 14:59:08.025893 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerName="extract-utilities" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.025899 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerName="extract-utilities" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.026120 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaec89f-007e-4ecf-a60f-f9f6729dfe13" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.026140 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ff2ec9-a1ce-4332-9cf6-36be7893125b" containerName="registry-server" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.027126 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.029659 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.041433 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.112092 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.116142 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.118814 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.140493 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.143526 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.148975 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-run\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149113 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-sys\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149159 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-config-data\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149180 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-dev\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149207 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh58c\" (UniqueName: \"kubernetes.io/projected/643689f7-b9d6-4f8a-a41b-a2a473973bd2-kube-api-access-dh58c\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149249 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-config-data-custom\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149282 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149307 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149312 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149356 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: 
\"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149562 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-etc-nvme\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149831 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149934 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.149981 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.150427 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-lib-modules\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.150480 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-scripts\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.152086 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.172197 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.252599 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-lib-modules\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.252660 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-run\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.252683 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-scripts\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.252710 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.252767 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.252778 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-lib-modules\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.252816 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-run\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.252842 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.252961 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-run\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253012 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253049 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253116 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-sys\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253150 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253173 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-config-data\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253194 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-dev\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253215 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh58c\" (UniqueName: \"kubernetes.io/projected/643689f7-b9d6-4f8a-a41b-a2a473973bd2-kube-api-access-dh58c\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253279 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-config-data-custom\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253301 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253325 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253350 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253374 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253410 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-config-data\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253447 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253474 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-etc-nvme\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253529 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253561 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253588 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253613 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253635 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253657 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253688 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253707 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253729 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253753 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxf6j\" (UniqueName: \"kubernetes.io/projected/24f527f0-1574-4733-8102-e412468ad8a6-kube-api-access-pxf6j\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253773 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253792 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253821 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253860 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253886 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253912 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253935 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-wwcr4\" (UniqueName: \"kubernetes.io/projected/25f323f8-db37-45ee-8db5-e3248826d64e-kube-api-access-wwcr4\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253963 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253983 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.254007 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.254036 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.254079 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.254106 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.254372 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-sys\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.254643 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.254868 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-dev\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 
26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.253790 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.255089 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.255160 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.255209 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.255481 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/643689f7-b9d6-4f8a-a41b-a2a473973bd2-etc-nvme\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.271054 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-scripts\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.271083 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.271553 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-config-data\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.271673 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/643689f7-b9d6-4f8a-a41b-a2a473973bd2-config-data-custom\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.274104 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh58c\" (UniqueName: \"kubernetes.io/projected/643689f7-b9d6-4f8a-a41b-a2a473973bd2-kube-api-access-dh58c\") pod \"cinder-backup-0\" (UID: \"643689f7-b9d6-4f8a-a41b-a2a473973bd2\") " pod="openstack/cinder-backup-0" Jan 26 14:59:08 
crc kubenswrapper[4922]: I0126 14:59:08.344415 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.356211 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.357108 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.357258 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.357418 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.356368 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.357523 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.358058 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.358247 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.358379 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.358538 4922 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.358687 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.358881 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.359039 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.360035 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.360679 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.360846 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.360987 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.361167 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.361311 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 
14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.361421 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.361457 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.361475 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.359233 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.361505 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.361445 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.359404 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.361505 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.359509 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.359313 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.362194 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.362519 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxf6j\" (UniqueName: \"kubernetes.io/projected/24f527f0-1574-4733-8102-e412468ad8a6-kube-api-access-pxf6j\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.362698 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.362890 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.363143 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.362795 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.363635 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.364550 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.364675 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.364793 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwcr4\" 
(UniqueName: \"kubernetes.io/projected/25f323f8-db37-45ee-8db5-e3248826d64e-kube-api-access-wwcr4\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.364911 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.365044 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.365197 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.367301 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.367508 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.367640 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.367787 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-run\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.366489 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.365191 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.365576 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.368398 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.365548 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.368566 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.365606 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24f527f0-1574-4733-8102-e412468ad8a6-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.368790 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/25f323f8-db37-45ee-8db5-e3248826d64e-run\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.371725 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25f323f8-db37-45ee-8db5-e3248826d64e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.372638 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.374337 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.377169 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24f527f0-1574-4733-8102-e412468ad8a6-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.381475 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pxf6j\" (UniqueName: \"kubernetes.io/projected/24f527f0-1574-4733-8102-e412468ad8a6-kube-api-access-pxf6j\") pod \"cinder-volume-nfs-2-0\" (UID: \"24f527f0-1574-4733-8102-e412468ad8a6\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.397386 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwcr4\" (UniqueName: \"kubernetes.io/projected/25f323f8-db37-45ee-8db5-e3248826d64e-kube-api-access-wwcr4\") pod \"cinder-volume-nfs-0\" (UID: \"25f323f8-db37-45ee-8db5-e3248826d64e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.437085 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.469562 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 26 14:59:08 crc kubenswrapper[4922]: I0126 14:59:08.959358 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 14:59:09 crc kubenswrapper[4922]: I0126 14:59:09.084406 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 14:59:09 crc kubenswrapper[4922]: I0126 14:59:09.179692 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 14:59:09 crc kubenswrapper[4922]: W0126 14:59:09.180525 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24f527f0_1574_4733_8102_e412468ad8a6.slice/crio-c8fa24179ad1b02725fb47415097e1ed6c051469f1ba448e34dc5f474f92b042 WatchSource:0}: Error finding container c8fa24179ad1b02725fb47415097e1ed6c051469f1ba448e34dc5f474f92b042: Status 404 returned error can't find the container with id c8fa24179ad1b02725fb47415097e1ed6c051469f1ba448e34dc5f474f92b042 Jan 26 14:59:09 crc kubenswrapper[4922]: I0126 14:59:09.949564 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"25f323f8-db37-45ee-8db5-e3248826d64e","Type":"ContainerStarted","Data":"548c90a43e7d9d203a52826d7c5f9b4bebdddf2da1033b2d206b11d0817c1372"} Jan 26 14:59:09 crc kubenswrapper[4922]: I0126 14:59:09.951417 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"643689f7-b9d6-4f8a-a41b-a2a473973bd2","Type":"ContainerStarted","Data":"423e6bde6cfe59600001af63c9c38549f43c20156ac4505da2d60d48957327b9"} Jan 26 14:59:09 crc kubenswrapper[4922]: I0126 14:59:09.953050 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"24f527f0-1574-4733-8102-e412468ad8a6","Type":"ContainerStarted","Data":"c8fa24179ad1b02725fb47415097e1ed6c051469f1ba448e34dc5f474f92b042"} Jan 26 14:59:10 crc kubenswrapper[4922]: I0126 14:59:10.972028 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"24f527f0-1574-4733-8102-e412468ad8a6","Type":"ContainerStarted","Data":"047521a36e2b38effb7dbc82043ae103ed3c666fbab1c0d336092918aa6ecdb9"} Jan 26 14:59:10 crc kubenswrapper[4922]: I0126 14:59:10.974222 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"25f323f8-db37-45ee-8db5-e3248826d64e","Type":"ContainerStarted","Data":"f374694ce1c214c17e153827ace31d77db737f8a7e2bd1bf94f515e9995da057"} Jan 26 
Jan 26 14:59:10 crc kubenswrapper[4922]: I0126 14:59:10.975351 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"643689f7-b9d6-4f8a-a41b-a2a473973bd2","Type":"ContainerStarted","Data":"a1bb7d51ba0ac56b66e3316d98c46fdddddd477de6754d7984c870570fd59942"}
Jan 26 14:59:10 crc kubenswrapper[4922]: I0126 14:59:10.975380 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"643689f7-b9d6-4f8a-a41b-a2a473973bd2","Type":"ContainerStarted","Data":"6277dce735e428acf0e8290837a63af2f9aa5e333bb38ec55f01aaa1ba5e54e0"}
Jan 26 14:59:10 crc kubenswrapper[4922]: I0126 14:59:10.999861 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=1.749044487 podStartE2EDuration="2.999842875s" podCreationTimestamp="2026-01-26 14:59:08 +0000 UTC" firstStartedPulling="2026-01-26 14:59:08.976143367 +0000 UTC m=+2966.178406149" lastFinishedPulling="2026-01-26 14:59:10.226941765 +0000 UTC m=+2967.429204537" observedRunningTime="2026-01-26 14:59:10.996676939 +0000 UTC m=+2968.198939711" watchObservedRunningTime="2026-01-26 14:59:10.999842875 +0000 UTC m=+2968.202105647"
Jan 26 14:59:11 crc kubenswrapper[4922]: I0126 14:59:11.991425 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"24f527f0-1574-4733-8102-e412468ad8a6","Type":"ContainerStarted","Data":"229d22953c34e163dd447b921556dcf530e12c8b55df9f532e3b72da0c50d60c"}
Jan 26 14:59:11 crc kubenswrapper[4922]: I0126 14:59:11.994381 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"25f323f8-db37-45ee-8db5-e3248826d64e","Type":"ContainerStarted","Data":"4297631ad82dd2ee95bea51044378e7a978cab6c29b1c09c774c609d9517cf70"}
Jan 26 14:59:12 crc kubenswrapper[4922]: I0126 14:59:12.029983 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.554828268 podStartE2EDuration="4.029949685s" podCreationTimestamp="2026-01-26 14:59:08 +0000 UTC" firstStartedPulling="2026-01-26 14:59:09.182734452 +0000 UTC m=+2966.384997244" lastFinishedPulling="2026-01-26 14:59:10.657855889 +0000 UTC m=+2967.860118661" observedRunningTime="2026-01-26 14:59:12.019858381 +0000 UTC m=+2969.222121163" watchObservedRunningTime="2026-01-26 14:59:12.029949685 +0000 UTC m=+2969.232212497"
Jan 26 14:59:12 crc kubenswrapper[4922]: I0126 14:59:12.057438 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.4948111170000002 podStartE2EDuration="4.057417892s" podCreationTimestamp="2026-01-26 14:59:08 +0000 UTC" firstStartedPulling="2026-01-26 14:59:09.097701121 +0000 UTC m=+2966.299963893" lastFinishedPulling="2026-01-26 14:59:10.660307896 +0000 UTC m=+2967.862570668" observedRunningTime="2026-01-26 14:59:12.042099675 +0000 UTC m=+2969.244362467" watchObservedRunningTime="2026-01-26 14:59:12.057417892 +0000 UTC m=+2969.259680664"
Jan 26 14:59:12 crc kubenswrapper[4922]: I0126 14:59:12.092290 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"
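The three pod_startup_latency_tracker entries are internally consistent: podStartE2EDuration is the wall time from podCreationTimestamp to observedRunningTime, and in each entry podStartSLOduration equals the E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which the monotonic m=+ offsets let you check directly. A small worked check of that relationship, with the values copied from the three entries above:

    # Verify the pod_startup_latency_tracker arithmetic using the monotonic
    # m=+ offsets: SLO duration = E2E - (lastFinishedPulling - firstStartedPulling).
    pods = {
        'cinder-backup-0':       (2.999842875, 2966.178406149, 2967.429204537, 1.749044487),
        'cinder-volume-nfs-2-0': (4.029949685, 2966.384997244, 2967.860118661, 2.554828268),
        'cinder-volume-nfs-0':   (4.057417892, 2966.299963893, 2967.862570668, 2.494811117),
    }
    for name, (e2e, pull_start, pull_end, slo) in pods.items():
        assert abs((e2e - (pull_end - pull_start)) - slo) < 1e-6, name
        print(f'{name}: pull={pull_end - pull_start:.3f}s slo={slo:.3f}s')

All three assertions hold, so the roughly 1.3 to 1.6 seconds that separate each pod's E2E and SLO figures is exactly its image-pull time.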
Jan 26 14:59:12 crc kubenswrapper[4922]: E0126 14:59:12.092537 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:59:13 crc kubenswrapper[4922]: I0126 14:59:13.344821 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Jan 26 14:59:13 crc kubenswrapper[4922]: I0126 14:59:13.437730 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0"
Jan 26 14:59:13 crc kubenswrapper[4922]: I0126 14:59:13.470390 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0"
Jan 26 14:59:18 crc kubenswrapper[4922]: I0126 14:59:18.554967 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Jan 26 14:59:18 crc kubenswrapper[4922]: I0126 14:59:18.632290 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0"
Jan 26 14:59:18 crc kubenswrapper[4922]: I0126 14:59:18.844431 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0"
Jan 26 14:59:24 crc kubenswrapper[4922]: I0126 14:59:24.092812 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"
Jan 26 14:59:24 crc kubenswrapper[4922]: E0126 14:59:24.093829 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:59:36 crc kubenswrapper[4922]: I0126 14:59:36.092673 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"
Jan 26 14:59:36 crc kubenswrapper[4922]: E0126 14:59:36.093441 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 14:59:50 crc kubenswrapper[4922]: I0126 14:59:50.092393 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"
Jan 26 14:59:50 crc kubenswrapper[4922]: E0126 14:59:50.093254 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.144497 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"]
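Through this stretch the machine-config-daemon container sits in a 5-minute CrashLoopBackOff: each periodic pod sync retries StartContainer, is rejected by the back-off, and leaves the RemoveContainer / "Error syncing pod" pair seen at 14:59:12, 14:59:24, 14:59:36, 14:59:50 and again at 15:00:02, i.e. roughly every 10 to 14 seconds while the back-off window runs down. A sketch for pulling those retry gaps out of a log like this one (the regex and the hard-coded year are assumptions, since syslog-style timestamps omit the year):

    import re, sys
    from datetime import datetime

    # Collect the timestamps of the back-off rejections for one pod and
    # print the gaps between consecutive retries.
    PAT = re.compile(r'^(\w{3} \d+ \d\d:\d\d:\d\d) .*CrashLoopBackOff.*machine-config-daemon-g5x8j')
    times = [datetime.strptime('2026 ' + m.group(1), '%Y %b %d %H:%M:%S')
             for line in sys.stdin if (m := PAT.match(line))]
    for a, b in zip(times, times[1:]):
        print(f'{b:%H:%M:%S} retry after {(b - a).total_seconds():.0f}s')

The steady cadence is the sync loop re-reporting the same back-off, not the back-off itself expiring; the 5m0s budget never elapses within this section.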
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.146666 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.149131 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.149813 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.162521 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"]
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.273192 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lpl6\" (UniqueName: \"kubernetes.io/projected/0a531389-b894-4e97-b997-c115d5e393e8-kube-api-access-6lpl6\") pod \"collect-profiles-29490660-4qpnf\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.273356 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a531389-b894-4e97-b997-c115d5e393e8-secret-volume\") pod \"collect-profiles-29490660-4qpnf\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.273401 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a531389-b894-4e97-b997-c115d5e393e8-config-volume\") pod \"collect-profiles-29490660-4qpnf\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.376813 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lpl6\" (UniqueName: \"kubernetes.io/projected/0a531389-b894-4e97-b997-c115d5e393e8-kube-api-access-6lpl6\") pod \"collect-profiles-29490660-4qpnf\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.376959 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a531389-b894-4e97-b997-c115d5e393e8-secret-volume\") pod \"collect-profiles-29490660-4qpnf\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.377004 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a531389-b894-4e97-b997-c115d5e393e8-config-volume\") pod \"collect-profiles-29490660-4qpnf\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"
Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.378117 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName:
\"kubernetes.io/configmap/0a531389-b894-4e97-b997-c115d5e393e8-config-volume\") pod \"collect-profiles-29490660-4qpnf\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf" Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.395717 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a531389-b894-4e97-b997-c115d5e393e8-secret-volume\") pod \"collect-profiles-29490660-4qpnf\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf" Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.397858 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lpl6\" (UniqueName: \"kubernetes.io/projected/0a531389-b894-4e97-b997-c115d5e393e8-kube-api-access-6lpl6\") pod \"collect-profiles-29490660-4qpnf\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf" Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.471553 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf" Jan 26 15:00:00 crc kubenswrapper[4922]: I0126 15:00:00.932233 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"] Jan 26 15:00:01 crc kubenswrapper[4922]: I0126 15:00:01.517039 4922 generic.go:334] "Generic (PLEG): container finished" podID="0a531389-b894-4e97-b997-c115d5e393e8" containerID="78d5b60a8be503da0b3149b32a0380cdc43dbea58edea13d6532b9607398719b" exitCode=0 Jan 26 15:00:01 crc kubenswrapper[4922]: I0126 15:00:01.517109 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf" event={"ID":"0a531389-b894-4e97-b997-c115d5e393e8","Type":"ContainerDied","Data":"78d5b60a8be503da0b3149b32a0380cdc43dbea58edea13d6532b9607398719b"} Jan 26 15:00:01 crc kubenswrapper[4922]: I0126 15:00:01.517493 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf" event={"ID":"0a531389-b894-4e97-b997-c115d5e393e8","Type":"ContainerStarted","Data":"8f327077f45054e58b93c0fbed84d7884715bcddcbb1088ca41ab91fb9b38411"} Jan 26 15:00:02 crc kubenswrapper[4922]: I0126 15:00:02.092712 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:00:02 crc kubenswrapper[4922]: E0126 15:00:02.093186 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:00:02 crc kubenswrapper[4922]: I0126 15:00:02.879954 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf" Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.029020 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lpl6\" (UniqueName: \"kubernetes.io/projected/0a531389-b894-4e97-b997-c115d5e393e8-kube-api-access-6lpl6\") pod \"0a531389-b894-4e97-b997-c115d5e393e8\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.029101 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a531389-b894-4e97-b997-c115d5e393e8-config-volume\") pod \"0a531389-b894-4e97-b997-c115d5e393e8\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.029126 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a531389-b894-4e97-b997-c115d5e393e8-secret-volume\") pod \"0a531389-b894-4e97-b997-c115d5e393e8\" (UID: \"0a531389-b894-4e97-b997-c115d5e393e8\") " Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.030324 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a531389-b894-4e97-b997-c115d5e393e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a531389-b894-4e97-b997-c115d5e393e8" (UID: "0a531389-b894-4e97-b997-c115d5e393e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.031163 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a531389-b894-4e97-b997-c115d5e393e8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.037985 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a531389-b894-4e97-b997-c115d5e393e8-kube-api-access-6lpl6" (OuterVolumeSpecName: "kube-api-access-6lpl6") pod "0a531389-b894-4e97-b997-c115d5e393e8" (UID: "0a531389-b894-4e97-b997-c115d5e393e8"). InnerVolumeSpecName "kube-api-access-6lpl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.038277 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a531389-b894-4e97-b997-c115d5e393e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a531389-b894-4e97-b997-c115d5e393e8" (UID: "0a531389-b894-4e97-b997-c115d5e393e8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.132823 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lpl6\" (UniqueName: \"kubernetes.io/projected/0a531389-b894-4e97-b997-c115d5e393e8-kube-api-access-6lpl6\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.132866 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a531389-b894-4e97-b997-c115d5e393e8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.535586 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf" event={"ID":"0a531389-b894-4e97-b997-c115d5e393e8","Type":"ContainerDied","Data":"8f327077f45054e58b93c0fbed84d7884715bcddcbb1088ca41ab91fb9b38411"} Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.535625 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f327077f45054e58b93c0fbed84d7884715bcddcbb1088ca41ab91fb9b38411" Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.535710 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf" Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.959457 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5"] Jan 26 15:00:03 crc kubenswrapper[4922]: I0126 15:00:03.968946 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490615-48gx5"] Jan 26 15:00:05 crc kubenswrapper[4922]: I0126 15:00:05.108348 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b5280f-d26b-4f56-bf87-b83f6e51ee10" path="/var/lib/kubelet/pods/04b5280f-d26b-4f56-bf87-b83f6e51ee10/volumes" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.027498 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.028430 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="prometheus" containerID="cri-o://6030b56f040caf8ee74a8cb47fb8055b8d271057e3dc42ba53563f728809542a" gracePeriod=600 Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.028498 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="thanos-sidecar" containerID="cri-o://6b81fcbf60f0aba8269d13051e5768e42e5c13475dd6d49f6087f84729ec1fcd" gracePeriod=600 Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.028528 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="config-reloader" containerID="cri-o://447de18641794d9020adf36fe8231ecdd0acea76d06e6eaa05e9d5723c535a11" gracePeriod=600 Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.617623 4922 generic.go:334] "Generic (PLEG): container finished" podID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerID="6b81fcbf60f0aba8269d13051e5768e42e5c13475dd6d49f6087f84729ec1fcd" exitCode=0 Jan 26 15:00:11 crc 
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.617875 4922 generic.go:334] "Generic (PLEG): container finished" podID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerID="447de18641794d9020adf36fe8231ecdd0acea76d06e6eaa05e9d5723c535a11" exitCode=0
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.617884 4922 generic.go:334] "Generic (PLEG): container finished" podID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerID="6030b56f040caf8ee74a8cb47fb8055b8d271057e3dc42ba53563f728809542a" exitCode=0
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.617726 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerDied","Data":"6b81fcbf60f0aba8269d13051e5768e42e5c13475dd6d49f6087f84729ec1fcd"}
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.617926 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerDied","Data":"447de18641794d9020adf36fe8231ecdd0acea76d06e6eaa05e9d5723c535a11"}
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.617938 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerDied","Data":"6030b56f040caf8ee74a8cb47fb8055b8d271057e3dc42ba53563f728809542a"}
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.812009 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.917869 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-config\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") "
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.918139 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") "
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.918270 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-1\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") "
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.918354 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-thanos-prometheus-http-client-file\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") "
Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.918489 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsbxs\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-kube-api-access-wsbxs\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\")
" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.918667 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-0\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.918804 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.918906 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-tls-assets\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.920192 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.920299 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.920439 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.920551 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8606e862-2e96-4827-9cb1-7c699e93e8a0-config-out\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.920666 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-2\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.920818 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-secret-combined-ca-bundle\") pod \"8606e862-2e96-4827-9cb1-7c699e93e8a0\" (UID: \"8606e862-2e96-4827-9cb1-7c699e93e8a0\") " Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.921621 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.924253 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.924664 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-kube-api-access-wsbxs" (OuterVolumeSpecName: "kube-api-access-wsbxs") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "kube-api-access-wsbxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.925012 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.925076 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-config" (OuterVolumeSpecName: "config") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.926233 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.926406 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.926886 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.927703 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.927890 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.928278 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8606e862-2e96-4827-9cb1-7c699e93e8a0-config-out" (OuterVolumeSpecName: "config-out") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.945493 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "pvc-89a18385-0704-44fa-a23b-cb95f8d108b3". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4922]: I0126 15:00:11.998571 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config" (OuterVolumeSpecName: "web-config") pod "8606e862-2e96-4827-9cb1-7c699e93e8a0" (UID: "8606e862-2e96-4827-9cb1-7c699e93e8a0"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.023944 4922 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.023974 4922 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024009 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") on node \"crc\" " Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024021 4922 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024031 4922 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8606e862-2e96-4827-9cb1-7c699e93e8a0-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024040 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024050 4922 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024070 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024080 4922 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024091 4922 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8606e862-2e96-4827-9cb1-7c699e93e8a0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024100 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsbxs\" (UniqueName: \"kubernetes.io/projected/8606e862-2e96-4827-9cb1-7c699e93e8a0-kube-api-access-wsbxs\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.024109 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8606e862-2e96-4827-9cb1-7c699e93e8a0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.048902 4922 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.049110 4922 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-89a18385-0704-44fa-a23b-cb95f8d108b3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3") on node "crc" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.126569 4922 reconciler_common.go:293] "Volume detached for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.631908 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8606e862-2e96-4827-9cb1-7c699e93e8a0","Type":"ContainerDied","Data":"6ad90e902c1f25153bd2eccf04fbcf9f9bb9fa211ae716c8af2106a3a32e5b4c"} Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.631973 4922 scope.go:117] "RemoveContainer" containerID="6b81fcbf60f0aba8269d13051e5768e42e5c13475dd6d49f6087f84729ec1fcd" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.632250 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.655897 4922 scope.go:117] "RemoveContainer" containerID="447de18641794d9020adf36fe8231ecdd0acea76d06e6eaa05e9d5723c535a11" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.682718 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.691382 4922 scope.go:117] "RemoveContainer" containerID="6030b56f040caf8ee74a8cb47fb8055b8d271057e3dc42ba53563f728809542a" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.696731 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.714454 4922 scope.go:117] "RemoveContainer" containerID="0f2597c1fce9cfcf343b11b6bf506c63e67b3781312bfe98d2963cd4241199d3" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.723770 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:00:12 crc kubenswrapper[4922]: E0126 15:00:12.724196 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="config-reloader" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.724208 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="config-reloader" Jan 26 15:00:12 crc kubenswrapper[4922]: E0126 15:00:12.724246 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a531389-b894-4e97-b997-c115d5e393e8" containerName="collect-profiles" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.724256 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a531389-b894-4e97-b997-c115d5e393e8" containerName="collect-profiles" Jan 26 15:00:12 crc kubenswrapper[4922]: E0126 15:00:12.724271 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="thanos-sidecar" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.724279 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="thanos-sidecar" Jan 26 15:00:12 crc kubenswrapper[4922]: E0126 15:00:12.724293 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="init-config-reloader" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.724300 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="init-config-reloader" Jan 26 15:00:12 crc kubenswrapper[4922]: E0126 15:00:12.724322 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="prometheus" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.724329 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="prometheus" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.724507 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a531389-b894-4e97-b997-c115d5e393e8" containerName="collect-profiles" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.724524 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="prometheus" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.724543 4922 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="config-reloader" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.724552 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" containerName="thanos-sidecar" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.726533 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.728870 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.729095 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.730381 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.731131 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.731178 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.731298 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8jmw5" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.731438 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.748303 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.784433 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.841256 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htp6w\" (UniqueName: \"kubernetes.io/projected/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-kube-api-access-htp6w\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.841589 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.841617 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.841644 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.841661 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.841694 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.841944 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.842085 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.842277 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.842352 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.842520 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.842601 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.842630 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.943871 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htp6w\" (UniqueName: \"kubernetes.io/projected/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-kube-api-access-htp6w\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.943918 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.943938 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.943965 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.943987 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.944015 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.944056 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 
15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.944139 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.944200 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.944219 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.944260 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.944286 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.944304 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.945038 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.945371 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 
15:00:12.945613 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.947613 4922 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.947659 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b3c4d564f6fc84458e9d6100c084ffc8a5ec4c60a4c8efc38ff6415485a8e6e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.951192 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.951555 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.951954 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.952553 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.953451 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.953689 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " 
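pod="openstack/prometheus-metric-storage-0"

The reconciler_common.go:245 / :218 and operation_generator.go:637 entries above trace the kubelet volume manager's reconciliation loop: for each volume in the pod's desired state it verifies the controller attached it, kicks off MountVolume, and records the SetUp success into actual state. A rough sketch of that loop, with simplified invented types standing in for the real desired/actual state caches:

```go
// Sketch of the desired-state vs. actual-state reconciliation that produces
// the VerifyControllerAttachedVolume -> MountVolume -> "SetUp succeeded"
// sequence. Types are invented for illustration.
package main

import "fmt"

type volume struct{ name, plugin string }

type reconciler struct {
	desired []volume        // volumes the pod spec wants mounted
	actual  map[string]bool // volumes already mounted on the node
}

func (r *reconciler) reconcile(pod string) {
	for _, v := range r.desired {
		if r.actual[v.name] {
			continue // already mounted; nothing to do
		}
		fmt.Printf("VerifyControllerAttachedVolume started for %q pod=%s\n", v.name, pod)
		fmt.Printf("MountVolume started for %q (%s)\n", v.name, v.plugin)
		// ... plugin-specific SetUp (NodePublish for CSI) happens here ...
		r.actual[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
	}
}

func main() {
	r := &reconciler{
		desired: []volume{
			{"web-config", "kubernetes.io/secret"},
			{"config-out", "kubernetes.io/empty-dir"},
			{"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3", "kubernetes.io/csi"},
		},
		actual: map[string]bool{},
	}
	r.reconcile("openstack/prometheus-metric-storage-0")
}
```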
pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.955519 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.960622 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.962936 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htp6w\" (UniqueName: \"kubernetes.io/projected/6bcc6ecd-6484-4c77-9278-970bfe41f0c2-kube-api-access-htp6w\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:12 crc kubenswrapper[4922]: I0126 15:00:12.984818 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a18385-0704-44fa-a23b-cb95f8d108b3\") pod \"prometheus-metric-storage-0\" (UID: \"6bcc6ecd-6484-4c77-9278-970bfe41f0c2\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:13 crc kubenswrapper[4922]: I0126 15:00:13.102458 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8606e862-2e96-4827-9cb1-7c699e93e8a0" path="/var/lib/kubelet/pods/8606e862-2e96-4827-9cb1-7c699e93e8a0/volumes" Jan 26 15:00:13 crc kubenswrapper[4922]: I0126 15:00:13.106718 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:13 crc kubenswrapper[4922]: I0126 15:00:13.592366 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:00:13 crc kubenswrapper[4922]: W0126 15:00:13.600094 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bcc6ecd_6484_4c77_9278_970bfe41f0c2.slice/crio-110721bd6758f32741cc31e7698ba53684c66b7493b3cb6f77e622ec6db0fd49 WatchSource:0}: Error finding container 110721bd6758f32741cc31e7698ba53684c66b7493b3cb6f77e622ec6db0fd49: Status 404 returned error can't find the container with id 110721bd6758f32741cc31e7698ba53684c66b7493b3cb6f77e622ec6db0fd49 Jan 26 15:00:13 crc kubenswrapper[4922]: I0126 15:00:13.660357 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bcc6ecd-6484-4c77-9278-970bfe41f0c2","Type":"ContainerStarted","Data":"110721bd6758f32741cc31e7698ba53684c66b7493b3cb6f77e622ec6db0fd49"} Jan 26 15:00:14 crc kubenswrapper[4922]: I0126 15:00:14.092630 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:00:14 crc kubenswrapper[4922]: E0126 15:00:14.093252 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:00:17 crc kubenswrapper[4922]: I0126 15:00:17.700684 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bcc6ecd-6484-4c77-9278-970bfe41f0c2","Type":"ContainerStarted","Data":"ed194dabaca34c2577a75e0d9c424c98a9300f8650afb65b700cbd64d71ac270"} Jan 26 15:00:18 crc kubenswrapper[4922]: I0126 15:00:18.118297 4922 scope.go:117] "RemoveContainer" containerID="d416e50c2895f8a76223641ccdab2eb2bbd14d00ea82b5172b8c0ae2557fb5c1" Jan 26 15:00:25 crc kubenswrapper[4922]: I0126 15:00:25.286706 4922 generic.go:334] "Generic (PLEG): container finished" podID="6bcc6ecd-6484-4c77-9278-970bfe41f0c2" containerID="ed194dabaca34c2577a75e0d9c424c98a9300f8650afb65b700cbd64d71ac270" exitCode=0 Jan 26 15:00:25 crc kubenswrapper[4922]: I0126 15:00:25.287322 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bcc6ecd-6484-4c77-9278-970bfe41f0c2","Type":"ContainerDied","Data":"ed194dabaca34c2577a75e0d9c424c98a9300f8650afb65b700cbd64d71ac270"} Jan 26 15:00:26 crc kubenswrapper[4922]: I0126 15:00:26.296570 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bcc6ecd-6484-4c77-9278-970bfe41f0c2","Type":"ContainerStarted","Data":"2052753080d83e246e1f196df68ee75d26d68516a445cf7de4eea26c969bf364"} Jan 26 15:00:27 crc kubenswrapper[4922]: I0126 15:00:27.092401 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:00:27 crc kubenswrapper[4922]: E0126 15:00:27.093243 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Jan 26 15:00:29 crc kubenswrapper[4922]: I0126 15:00:29.327744 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bcc6ecd-6484-4c77-9278-970bfe41f0c2","Type":"ContainerStarted","Data":"1a365aff72da5e5f8af3c2ccfe28fcd5cc55f308337f2453157d93d5a266b589"} Jan 26 15:00:29 crc kubenswrapper[4922]: I0126 15:00:29.328216 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bcc6ecd-6484-4c77-9278-970bfe41f0c2","Type":"ContainerStarted","Data":"91247d501a0eda49d5d372a3451b3f1bc7e4287f3dbf04927eb8ed8d1c8416a4"} Jan 26 15:00:29 crc kubenswrapper[4922]: I0126 15:00:29.373912 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.373891146 podStartE2EDuration="17.373891146s" podCreationTimestamp="2026-01-26 15:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:00:29.363816322 +0000 UTC m=+3046.566079104" watchObservedRunningTime="2026-01-26 15:00:29.373891146 +0000 UTC m=+3046.576153918" Jan 26 15:00:33 crc kubenswrapper[4922]: I0126 15:00:33.108987 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:42 crc kubenswrapper[4922]: I0126 15:00:42.092301 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:00:42 crc kubenswrapper[4922]: E0126 15:00:42.093113 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:00:43 crc kubenswrapper[4922]: I0126 15:00:43.114863 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:43 crc kubenswrapper[4922]: I0126 15:00:43.125541 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:43 crc kubenswrapper[4922]: I0126 15:00:43.464696 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 15:00:53 crc kubenswrapper[4922]: I0126 15:00:53.922995 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dqn99"] Jan 26 15:00:53 crc kubenswrapper[4922]: I0126 15:00:53.927420 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:53 crc kubenswrapper[4922]: I0126 15:00:53.941863 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqn99"] Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.022607 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwxj\" (UniqueName: \"kubernetes.io/projected/f1c2869d-5790-418c-aa9e-6852da084d39-kube-api-access-bvwxj\") pod \"community-operators-dqn99\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.022686 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-catalog-content\") pod \"community-operators-dqn99\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.022750 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-utilities\") pod \"community-operators-dqn99\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.124600 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-catalog-content\") pod \"community-operators-dqn99\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.125160 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-catalog-content\") pod \"community-operators-dqn99\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.125215 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-utilities\") pod \"community-operators-dqn99\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.124753 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-utilities\") pod \"community-operators-dqn99\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.125393 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwxj\" (UniqueName: \"kubernetes.io/projected/f1c2869d-5790-418c-aa9e-6852da084d39-kube-api-access-bvwxj\") pod \"community-operators-dqn99\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.144176 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bvwxj\" (UniqueName: \"kubernetes.io/projected/f1c2869d-5790-418c-aa9e-6852da084d39-kube-api-access-bvwxj\") pod \"community-operators-dqn99\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.261086 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:00:54 crc kubenswrapper[4922]: W0126 15:00:54.795257 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1c2869d_5790_418c_aa9e_6852da084d39.slice/crio-dcc2f8234ccc83b3616f2aebae3643cc14157c53f31d037b39b1f4168c7a7bb0 WatchSource:0}: Error finding container dcc2f8234ccc83b3616f2aebae3643cc14157c53f31d037b39b1f4168c7a7bb0: Status 404 returned error can't find the container with id dcc2f8234ccc83b3616f2aebae3643cc14157c53f31d037b39b1f4168c7a7bb0 Jan 26 15:00:54 crc kubenswrapper[4922]: I0126 15:00:54.800355 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqn99"] Jan 26 15:00:55 crc kubenswrapper[4922]: I0126 15:00:55.095339 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:00:55 crc kubenswrapper[4922]: E0126 15:00:55.095909 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:00:55 crc kubenswrapper[4922]: I0126 15:00:55.587243 4922 generic.go:334] "Generic (PLEG): container finished" podID="f1c2869d-5790-418c-aa9e-6852da084d39" containerID="4d1f6e5f2405e6bcaff59da0329f10d9235b7c549ed4a815822294652fbaee71" exitCode=0 Jan 26 15:00:55 crc kubenswrapper[4922]: I0126 15:00:55.587351 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqn99" event={"ID":"f1c2869d-5790-418c-aa9e-6852da084d39","Type":"ContainerDied","Data":"4d1f6e5f2405e6bcaff59da0329f10d9235b7c549ed4a815822294652fbaee71"} Jan 26 15:00:55 crc kubenswrapper[4922]: I0126 15:00:55.587570 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqn99" event={"ID":"f1c2869d-5790-418c-aa9e-6852da084d39","Type":"ContainerStarted","Data":"dcc2f8234ccc83b3616f2aebae3643cc14157c53f31d037b39b1f4168c7a7bb0"} Jan 26 15:00:57 crc kubenswrapper[4922]: I0126 15:00:57.612737 4922 generic.go:334] "Generic (PLEG): container finished" podID="f1c2869d-5790-418c-aa9e-6852da084d39" containerID="855a462fb89fc82c5752942d9645dca00e6daff49b0801706353677a048601b6" exitCode=0 Jan 26 15:00:57 crc kubenswrapper[4922]: I0126 15:00:57.612756 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqn99" event={"ID":"f1c2869d-5790-418c-aa9e-6852da084d39","Type":"ContainerDied","Data":"855a462fb89fc82c5752942d9645dca00e6daff49b0801706353677a048601b6"} Jan 26 15:00:57 crc kubenswrapper[4922]: I0126 15:00:57.839875 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 15:00:57 crc kubenswrapper[4922]: 
Jan 26 15:00:57 crc kubenswrapper[4922]: I0126 15:00:57.845583 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g7sds" Jan 26 15:00:57 crc kubenswrapper[4922]: I0126 15:00:57.845583 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 15:00:57 crc kubenswrapper[4922]: I0126 15:00:57.845900 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 15:00:57 crc kubenswrapper[4922]: I0126 15:00:57.846085 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 15:00:57 crc kubenswrapper[4922]: I0126 15:00:57.848511 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.001188 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.001287 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.001347 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdvs\" (UniqueName: \"kubernetes.io/projected/29bf7bdf-8c0e-4e1c-812d-1220cc968575-kube-api-access-5gdvs\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.001395 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.001465 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.001507 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.001526 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.001565 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.001806 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-config-data\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.103958 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gdvs\" (UniqueName: \"kubernetes.io/projected/29bf7bdf-8c0e-4e1c-812d-1220cc968575-kube-api-access-5gdvs\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.104197 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.104225 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.104253 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.104267 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.104294 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.104346 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-config-data\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.104428 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.104455 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.104763 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.105109 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.105696 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.106619 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.107890 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-config-data\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.110911 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.112274 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: 
\"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.112704 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.123032 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gdvs\" (UniqueName: \"kubernetes.io/projected/29bf7bdf-8c0e-4e1c-812d-1220cc968575-kube-api-access-5gdvs\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.141284 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.163794 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.626047 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqn99" event={"ID":"f1c2869d-5790-418c-aa9e-6852da084d39","Type":"ContainerStarted","Data":"eed3de45d565209b41b07c1fafc00f3ff8a1dd4c1746ab42ee0999ba188b3bef"} Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.650264 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dqn99" podStartSLOduration=3.225595032 podStartE2EDuration="5.650208368s" podCreationTimestamp="2026-01-26 15:00:53 +0000 UTC" firstStartedPulling="2026-01-26 15:00:55.58943786 +0000 UTC m=+3072.791700632" lastFinishedPulling="2026-01-26 15:00:58.014051196 +0000 UTC m=+3075.216313968" observedRunningTime="2026-01-26 15:00:58.645058868 +0000 UTC m=+3075.847321660" watchObservedRunningTime="2026-01-26 15:00:58.650208368 +0000 UTC m=+3075.852471150" Jan 26 15:00:58 crc kubenswrapper[4922]: I0126 15:00:58.692349 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 15:00:59 crc kubenswrapper[4922]: I0126 15:00:59.639903 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"29bf7bdf-8c0e-4e1c-812d-1220cc968575","Type":"ContainerStarted","Data":"ddf9a39aa7074596d003517e1c797a726b555b3d043a435a2450c47755fa4074"} Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.148297 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490661-z2dhr"] Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.149675 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.158964 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490661-z2dhr"] Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.278701 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f24s5\" (UniqueName: \"kubernetes.io/projected/a1e08b81-5e31-4556-93f4-06430fed0f54-kube-api-access-f24s5\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.278815 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-fernet-keys\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.278875 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-config-data\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.278897 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-combined-ca-bundle\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.380655 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f24s5\" (UniqueName: \"kubernetes.io/projected/a1e08b81-5e31-4556-93f4-06430fed0f54-kube-api-access-f24s5\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.380777 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-fernet-keys\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.380819 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-config-data\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.380842 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-combined-ca-bundle\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.387688 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-fernet-keys\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.387720 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-config-data\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.396051 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-combined-ca-bundle\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.400754 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f24s5\" (UniqueName: \"kubernetes.io/projected/a1e08b81-5e31-4556-93f4-06430fed0f54-kube-api-access-f24s5\") pod \"keystone-cron-29490661-z2dhr\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:00 crc kubenswrapper[4922]: I0126 15:01:00.480238 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:04 crc kubenswrapper[4922]: I0126 15:01:04.262258 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:01:04 crc kubenswrapper[4922]: I0126 15:01:04.262608 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:01:04 crc kubenswrapper[4922]: I0126 15:01:04.315093 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:01:04 crc kubenswrapper[4922]: I0126 15:01:04.780456 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:01:04 crc kubenswrapper[4922]: I0126 15:01:04.838799 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqn99"] Jan 26 15:01:06 crc kubenswrapper[4922]: I0126 15:01:06.738558 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dqn99" podUID="f1c2869d-5790-418c-aa9e-6852da084d39" containerName="registry-server" containerID="cri-o://eed3de45d565209b41b07c1fafc00f3ff8a1dd4c1746ab42ee0999ba188b3bef" gracePeriod=2 Jan 26 15:01:07 crc kubenswrapper[4922]: I0126 15:01:07.750367 4922 generic.go:334] "Generic (PLEG): container finished" podID="f1c2869d-5790-418c-aa9e-6852da084d39" containerID="eed3de45d565209b41b07c1fafc00f3ff8a1dd4c1746ab42ee0999ba188b3bef" exitCode=0 Jan 26 15:01:07 crc kubenswrapper[4922]: I0126 15:01:07.750442 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqn99" event={"ID":"f1c2869d-5790-418c-aa9e-6852da084d39","Type":"ContainerDied","Data":"eed3de45d565209b41b07c1fafc00f3ff8a1dd4c1746ab42ee0999ba188b3bef"} Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.092788 4922 scope.go:117] 
"RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:01:10 crc kubenswrapper[4922]: E0126 15:01:10.093333 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.466949 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.606782 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-utilities\") pod \"f1c2869d-5790-418c-aa9e-6852da084d39\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.606894 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvwxj\" (UniqueName: \"kubernetes.io/projected/f1c2869d-5790-418c-aa9e-6852da084d39-kube-api-access-bvwxj\") pod \"f1c2869d-5790-418c-aa9e-6852da084d39\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.607056 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-catalog-content\") pod \"f1c2869d-5790-418c-aa9e-6852da084d39\" (UID: \"f1c2869d-5790-418c-aa9e-6852da084d39\") " Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.607664 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-utilities" (OuterVolumeSpecName: "utilities") pod "f1c2869d-5790-418c-aa9e-6852da084d39" (UID: "f1c2869d-5790-418c-aa9e-6852da084d39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.610956 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c2869d-5790-418c-aa9e-6852da084d39-kube-api-access-bvwxj" (OuterVolumeSpecName: "kube-api-access-bvwxj") pod "f1c2869d-5790-418c-aa9e-6852da084d39" (UID: "f1c2869d-5790-418c-aa9e-6852da084d39"). InnerVolumeSpecName "kube-api-access-bvwxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.659900 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1c2869d-5790-418c-aa9e-6852da084d39" (UID: "f1c2869d-5790-418c-aa9e-6852da084d39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.682900 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490661-z2dhr"] Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.709536 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvwxj\" (UniqueName: \"kubernetes.io/projected/f1c2869d-5790-418c-aa9e-6852da084d39-kube-api-access-bvwxj\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.709580 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.709590 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c2869d-5790-418c-aa9e-6852da084d39-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.788377 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqn99" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.788397 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqn99" event={"ID":"f1c2869d-5790-418c-aa9e-6852da084d39","Type":"ContainerDied","Data":"dcc2f8234ccc83b3616f2aebae3643cc14157c53f31d037b39b1f4168c7a7bb0"} Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.788480 4922 scope.go:117] "RemoveContainer" containerID="eed3de45d565209b41b07c1fafc00f3ff8a1dd4c1746ab42ee0999ba188b3bef" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.790032 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490661-z2dhr" event={"ID":"a1e08b81-5e31-4556-93f4-06430fed0f54","Type":"ContainerStarted","Data":"2a10729d7908814851e68957a83992ca23c735f240612c416d1e5499f2d21379"} Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.808187 4922 scope.go:117] "RemoveContainer" containerID="855a462fb89fc82c5752942d9645dca00e6daff49b0801706353677a048601b6" Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.824184 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqn99"] Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.832554 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dqn99"] Jan 26 15:01:10 crc kubenswrapper[4922]: I0126 15:01:10.846665 4922 scope.go:117] "RemoveContainer" containerID="4d1f6e5f2405e6bcaff59da0329f10d9235b7c549ed4a815822294652fbaee71" Jan 26 15:01:11 crc kubenswrapper[4922]: I0126 15:01:11.117715 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c2869d-5790-418c-aa9e-6852da084d39" path="/var/lib/kubelet/pods/f1c2869d-5790-418c-aa9e-6852da084d39/volumes" Jan 26 15:01:11 crc kubenswrapper[4922]: I0126 15:01:11.802539 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"29bf7bdf-8c0e-4e1c-812d-1220cc968575","Type":"ContainerStarted","Data":"48a1f50f4376e429475766362060f0e965e244f7e3df4999812311e773d1e25b"} Jan 26 15:01:11 crc kubenswrapper[4922]: I0126 15:01:11.808513 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490661-z2dhr" 
event={"ID":"a1e08b81-5e31-4556-93f4-06430fed0f54","Type":"ContainerStarted","Data":"1c9ad6a59162ebf81313e6142bf1787047ef4533fe7e5fa070f6b49121ec67bb"} Jan 26 15:01:11 crc kubenswrapper[4922]: I0126 15:01:11.836447 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.291117882 podStartE2EDuration="15.836425817s" podCreationTimestamp="2026-01-26 15:00:56 +0000 UTC" firstStartedPulling="2026-01-26 15:00:58.694976855 +0000 UTC m=+3075.897239637" lastFinishedPulling="2026-01-26 15:01:10.2402848 +0000 UTC m=+3087.442547572" observedRunningTime="2026-01-26 15:01:11.818784907 +0000 UTC m=+3089.021047679" watchObservedRunningTime="2026-01-26 15:01:11.836425817 +0000 UTC m=+3089.038688589" Jan 26 15:01:11 crc kubenswrapper[4922]: I0126 15:01:11.840523 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490661-z2dhr" podStartSLOduration=11.840507238 podStartE2EDuration="11.840507238s" podCreationTimestamp="2026-01-26 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:01:11.836615312 +0000 UTC m=+3089.038878084" watchObservedRunningTime="2026-01-26 15:01:11.840507238 +0000 UTC m=+3089.042770010" Jan 26 15:01:15 crc kubenswrapper[4922]: I0126 15:01:15.846625 4922 generic.go:334] "Generic (PLEG): container finished" podID="a1e08b81-5e31-4556-93f4-06430fed0f54" containerID="1c9ad6a59162ebf81313e6142bf1787047ef4533fe7e5fa070f6b49121ec67bb" exitCode=0 Jan 26 15:01:15 crc kubenswrapper[4922]: I0126 15:01:15.846739 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490661-z2dhr" event={"ID":"a1e08b81-5e31-4556-93f4-06430fed0f54","Type":"ContainerDied","Data":"1c9ad6a59162ebf81313e6142bf1787047ef4533fe7e5fa070f6b49121ec67bb"} Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.450369 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.587661 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-fernet-keys\") pod \"a1e08b81-5e31-4556-93f4-06430fed0f54\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.587902 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-combined-ca-bundle\") pod \"a1e08b81-5e31-4556-93f4-06430fed0f54\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.588026 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-config-data\") pod \"a1e08b81-5e31-4556-93f4-06430fed0f54\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.588058 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f24s5\" (UniqueName: \"kubernetes.io/projected/a1e08b81-5e31-4556-93f4-06430fed0f54-kube-api-access-f24s5\") pod \"a1e08b81-5e31-4556-93f4-06430fed0f54\" (UID: \"a1e08b81-5e31-4556-93f4-06430fed0f54\") " Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.594408 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a1e08b81-5e31-4556-93f4-06430fed0f54" (UID: "a1e08b81-5e31-4556-93f4-06430fed0f54"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.595274 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e08b81-5e31-4556-93f4-06430fed0f54-kube-api-access-f24s5" (OuterVolumeSpecName: "kube-api-access-f24s5") pod "a1e08b81-5e31-4556-93f4-06430fed0f54" (UID: "a1e08b81-5e31-4556-93f4-06430fed0f54"). InnerVolumeSpecName "kube-api-access-f24s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.626027 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1e08b81-5e31-4556-93f4-06430fed0f54" (UID: "a1e08b81-5e31-4556-93f4-06430fed0f54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.650218 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-config-data" (OuterVolumeSpecName: "config-data") pod "a1e08b81-5e31-4556-93f4-06430fed0f54" (UID: "a1e08b81-5e31-4556-93f4-06430fed0f54"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.690908 4922 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.691193 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.691204 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e08b81-5e31-4556-93f4-06430fed0f54-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.691213 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f24s5\" (UniqueName: \"kubernetes.io/projected/a1e08b81-5e31-4556-93f4-06430fed0f54-kube-api-access-f24s5\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.867208 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490661-z2dhr" event={"ID":"a1e08b81-5e31-4556-93f4-06430fed0f54","Type":"ContainerDied","Data":"2a10729d7908814851e68957a83992ca23c735f240612c416d1e5499f2d21379"} Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.867257 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a10729d7908814851e68957a83992ca23c735f240612c416d1e5499f2d21379" Jan 26 15:01:17 crc kubenswrapper[4922]: I0126 15:01:17.867261 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490661-z2dhr" Jan 26 15:01:22 crc kubenswrapper[4922]: I0126 15:01:22.093078 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:01:22 crc kubenswrapper[4922]: E0126 15:01:22.093976 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:01:37 crc kubenswrapper[4922]: I0126 15:01:37.093281 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:01:37 crc kubenswrapper[4922]: E0126 15:01:37.094001 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:01:50 crc kubenswrapper[4922]: I0126 15:01:50.093152 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:01:50 crc kubenswrapper[4922]: E0126 15:01:50.094088 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:02:04 crc kubenswrapper[4922]: I0126 15:02:04.093144 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:02:04 crc kubenswrapper[4922]: E0126 15:02:04.094727 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:02:16 crc kubenswrapper[4922]: I0126 15:02:16.093548 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:02:16 crc kubenswrapper[4922]: E0126 15:02:16.094355 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:02:28 crc kubenswrapper[4922]: I0126 15:02:28.092725 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:02:28 crc kubenswrapper[4922]: E0126 15:02:28.093485 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:02:40 crc kubenswrapper[4922]: I0126 15:02:40.092664 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:02:40 crc kubenswrapper[4922]: E0126 15:02:40.093514 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.580724 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94h6j"] Jan 26 15:02:43 crc kubenswrapper[4922]: E0126 15:02:43.581912 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c2869d-5790-418c-aa9e-6852da084d39" containerName="extract-content" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.581931 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c2869d-5790-418c-aa9e-6852da084d39" 
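The "RemoveContainer" / "Error syncing pod, skipping" pairs above keep repeating for the same container ID because the pod worker re-syncs while CrashLoopBackOff holds the restart back; the gate quoted in the message is the 5m0s ceiling of kubelet's restart backoff, which starts small and doubles per crash (base 10s and factor 2 are the commonly documented kubelet defaults, assumed here, not stated in the log). A sketch of that schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: 10s base, doubling, capped at the
        // "back-off 5m0s" ceiling quoted in the records above.
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for restart := 1; ; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            if delay >= maxDelay {
                break // every later restart waits the full 5m0s
            }
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }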
containerName="extract-content" Jan 26 15:02:43 crc kubenswrapper[4922]: E0126 15:02:43.581950 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c2869d-5790-418c-aa9e-6852da084d39" containerName="extract-utilities" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.581959 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c2869d-5790-418c-aa9e-6852da084d39" containerName="extract-utilities" Jan 26 15:02:43 crc kubenswrapper[4922]: E0126 15:02:43.581979 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e08b81-5e31-4556-93f4-06430fed0f54" containerName="keystone-cron" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.581988 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e08b81-5e31-4556-93f4-06430fed0f54" containerName="keystone-cron" Jan 26 15:02:43 crc kubenswrapper[4922]: E0126 15:02:43.582003 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c2869d-5790-418c-aa9e-6852da084d39" containerName="registry-server" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.582012 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c2869d-5790-418c-aa9e-6852da084d39" containerName="registry-server" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.582320 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e08b81-5e31-4556-93f4-06430fed0f54" containerName="keystone-cron" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.582338 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c2869d-5790-418c-aa9e-6852da084d39" containerName="registry-server" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.600325 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.609264 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94h6j"] Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.733772 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-catalog-content\") pod \"redhat-operators-94h6j\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.734423 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-utilities\") pod \"redhat-operators-94h6j\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.734962 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p92q8\" (UniqueName: \"kubernetes.io/projected/85b61718-e961-41e4-9035-8689eee9467d-kube-api-access-p92q8\") pod \"redhat-operators-94h6j\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.837648 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p92q8\" (UniqueName: \"kubernetes.io/projected/85b61718-e961-41e4-9035-8689eee9467d-kube-api-access-p92q8\") pod \"redhat-operators-94h6j\" (UID: 
\"85b61718-e961-41e4-9035-8689eee9467d\") " pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.837802 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-catalog-content\") pod \"redhat-operators-94h6j\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.837866 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-utilities\") pod \"redhat-operators-94h6j\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.838476 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-catalog-content\") pod \"redhat-operators-94h6j\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.838540 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-utilities\") pod \"redhat-operators-94h6j\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.868084 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p92q8\" (UniqueName: \"kubernetes.io/projected/85b61718-e961-41e4-9035-8689eee9467d-kube-api-access-p92q8\") pod \"redhat-operators-94h6j\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:43 crc kubenswrapper[4922]: I0126 15:02:43.931828 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:44 crc kubenswrapper[4922]: I0126 15:02:44.491477 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94h6j"] Jan 26 15:02:44 crc kubenswrapper[4922]: I0126 15:02:44.752317 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94h6j" event={"ID":"85b61718-e961-41e4-9035-8689eee9467d","Type":"ContainerStarted","Data":"7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6"} Jan 26 15:02:44 crc kubenswrapper[4922]: I0126 15:02:44.752366 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94h6j" event={"ID":"85b61718-e961-41e4-9035-8689eee9467d","Type":"ContainerStarted","Data":"c2541d529c050f4f8e5c23e8c13e1f6ee69424fac170f7bd26b94482f3497ae6"} Jan 26 15:02:45 crc kubenswrapper[4922]: I0126 15:02:45.765053 4922 generic.go:334] "Generic (PLEG): container finished" podID="85b61718-e961-41e4-9035-8689eee9467d" containerID="7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6" exitCode=0 Jan 26 15:02:45 crc kubenswrapper[4922]: I0126 15:02:45.765126 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94h6j" event={"ID":"85b61718-e961-41e4-9035-8689eee9467d","Type":"ContainerDied","Data":"7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6"} Jan 26 15:02:45 crc kubenswrapper[4922]: I0126 15:02:45.765469 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94h6j" event={"ID":"85b61718-e961-41e4-9035-8689eee9467d","Type":"ContainerStarted","Data":"6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab"} Jan 26 15:02:47 crc kubenswrapper[4922]: I0126 15:02:47.790812 4922 generic.go:334] "Generic (PLEG): container finished" podID="85b61718-e961-41e4-9035-8689eee9467d" containerID="6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab" exitCode=0 Jan 26 15:02:47 crc kubenswrapper[4922]: I0126 15:02:47.790878 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94h6j" event={"ID":"85b61718-e961-41e4-9035-8689eee9467d","Type":"ContainerDied","Data":"6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab"} Jan 26 15:02:50 crc kubenswrapper[4922]: I0126 15:02:50.835222 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94h6j" event={"ID":"85b61718-e961-41e4-9035-8689eee9467d","Type":"ContainerStarted","Data":"0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12"} Jan 26 15:02:50 crc kubenswrapper[4922]: I0126 15:02:50.855354 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94h6j" podStartSLOduration=2.239616778 podStartE2EDuration="7.855336663s" podCreationTimestamp="2026-01-26 15:02:43 +0000 UTC" firstStartedPulling="2026-01-26 15:02:44.754257629 +0000 UTC m=+3181.956520401" lastFinishedPulling="2026-01-26 15:02:50.369977514 +0000 UTC m=+3187.572240286" observedRunningTime="2026-01-26 15:02:50.854268484 +0000 UTC m=+3188.056531256" watchObservedRunningTime="2026-01-26 15:02:50.855336663 +0000 UTC m=+3188.057599435" Jan 26 15:02:53 crc kubenswrapper[4922]: I0126 15:02:53.932931 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:53 crc kubenswrapper[4922]: I0126 
15:02:53.933999 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:02:54 crc kubenswrapper[4922]: I0126 15:02:54.983369 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-94h6j" podUID="85b61718-e961-41e4-9035-8689eee9467d" containerName="registry-server" probeResult="failure" output=< Jan 26 15:02:54 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Jan 26 15:02:54 crc kubenswrapper[4922]: > Jan 26 15:02:55 crc kubenswrapper[4922]: I0126 15:02:55.092525 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:02:55 crc kubenswrapper[4922]: E0126 15:02:55.092958 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:03:03 crc kubenswrapper[4922]: I0126 15:03:03.982891 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:03:04 crc kubenswrapper[4922]: I0126 15:03:04.040861 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:03:04 crc kubenswrapper[4922]: I0126 15:03:04.221855 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94h6j"] Jan 26 15:03:05 crc kubenswrapper[4922]: I0126 15:03:05.982347 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-94h6j" podUID="85b61718-e961-41e4-9035-8689eee9467d" containerName="registry-server" containerID="cri-o://0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12" gracePeriod=2 Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.474917 4922 util.go:48] "No ready sandbox for pod can be found. 
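The startup-probe output above, timeout: failed to connect service ":50051" within 1s, is the characteristic failure message of grpc_health_probe, which catalog pods like this one typically run against the registry-server's gRPC port (an assumption from the message format; the probe definition itself is not in this log). Nothing was listening yet, and the same probe reports "started" nine seconds later at 15:03:03. A minimal Go stand-in for that check, assuming the registry exposes the standard grpc.health.v1 service on :50051:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // One-second budget, matching the "within 1s" in the probe output.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
        if err != nil {
            fmt.Println("timeout: failed to connect service \":50051\" within 1s")
            return
        }
        defer conn.Close()
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        fmt.Println(resp.GetStatus(), err) // SERVING once the registry is up
    }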
Need to start a new one" pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.556493 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p92q8\" (UniqueName: \"kubernetes.io/projected/85b61718-e961-41e4-9035-8689eee9467d-kube-api-access-p92q8\") pod \"85b61718-e961-41e4-9035-8689eee9467d\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.556609 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-catalog-content\") pod \"85b61718-e961-41e4-9035-8689eee9467d\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.556704 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-utilities\") pod \"85b61718-e961-41e4-9035-8689eee9467d\" (UID: \"85b61718-e961-41e4-9035-8689eee9467d\") " Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.557585 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-utilities" (OuterVolumeSpecName: "utilities") pod "85b61718-e961-41e4-9035-8689eee9467d" (UID: "85b61718-e961-41e4-9035-8689eee9467d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.564053 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b61718-e961-41e4-9035-8689eee9467d-kube-api-access-p92q8" (OuterVolumeSpecName: "kube-api-access-p92q8") pod "85b61718-e961-41e4-9035-8689eee9467d" (UID: "85b61718-e961-41e4-9035-8689eee9467d"). InnerVolumeSpecName "kube-api-access-p92q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.659369 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p92q8\" (UniqueName: \"kubernetes.io/projected/85b61718-e961-41e4-9035-8689eee9467d-kube-api-access-p92q8\") on node \"crc\" DevicePath \"\"" Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.659422 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.677642 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85b61718-e961-41e4-9035-8689eee9467d" (UID: "85b61718-e961-41e4-9035-8689eee9467d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.761778 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b61718-e961-41e4-9035-8689eee9467d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.997553 4922 generic.go:334] "Generic (PLEG): container finished" podID="85b61718-e961-41e4-9035-8689eee9467d" containerID="0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12" exitCode=0 Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.997624 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94h6j" event={"ID":"85b61718-e961-41e4-9035-8689eee9467d","Type":"ContainerDied","Data":"0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12"} Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.997674 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94h6j" event={"ID":"85b61718-e961-41e4-9035-8689eee9467d","Type":"ContainerDied","Data":"c2541d529c050f4f8e5c23e8c13e1f6ee69424fac170f7bd26b94482f3497ae6"} Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.997682 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94h6j" Jan 26 15:03:06 crc kubenswrapper[4922]: I0126 15:03:06.997702 4922 scope.go:117] "RemoveContainer" containerID="0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12" Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.026354 4922 scope.go:117] "RemoveContainer" containerID="6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab" Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.044389 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94h6j"] Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.055752 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-94h6j"] Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.057640 4922 scope.go:117] "RemoveContainer" containerID="7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6" Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.106674 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b61718-e961-41e4-9035-8689eee9467d" path="/var/lib/kubelet/pods/85b61718-e961-41e4-9035-8689eee9467d/volumes" Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.109848 4922 scope.go:117] "RemoveContainer" containerID="0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12" Jan 26 15:03:07 crc kubenswrapper[4922]: E0126 15:03:07.110386 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12\": container with ID starting with 0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12 not found: ID does not exist" containerID="0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12" Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.110492 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12"} err="failed to get container status \"0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12\": rpc error: code = NotFound desc 
= could not find container \"0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12\": container with ID starting with 0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12 not found: ID does not exist" Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.110538 4922 scope.go:117] "RemoveContainer" containerID="6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab" Jan 26 15:03:07 crc kubenswrapper[4922]: E0126 15:03:07.111252 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab\": container with ID starting with 6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab not found: ID does not exist" containerID="6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab" Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.111306 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab"} err="failed to get container status \"6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab\": rpc error: code = NotFound desc = could not find container \"6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab\": container with ID starting with 6c1a24a40c6742f243c548c713486912f24785c4868dceaf10443bfee8309dab not found: ID does not exist" Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.111340 4922 scope.go:117] "RemoveContainer" containerID="7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6" Jan 26 15:03:07 crc kubenswrapper[4922]: E0126 15:03:07.111632 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6\": container with ID starting with 7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6 not found: ID does not exist" containerID="7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6" Jan 26 15:03:07 crc kubenswrapper[4922]: I0126 15:03:07.111671 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6"} err="failed to get container status \"7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6\": rpc error: code = NotFound desc = could not find container \"7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6\": container with ID starting with 7f33bf4fb8d4429539efc0c5030b8c291c063bd1a4ba793e7dff1c85aecdbaf6 not found: ID does not exist" Jan 26 15:03:09 crc kubenswrapper[4922]: I0126 15:03:09.092711 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 26 15:03:09 crc kubenswrapper[4922]: E0126 15:03:09.093556 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:03:23 crc kubenswrapper[4922]: I0126 15:03:23.098756 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3" Jan 
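The three NotFound errors above are a benign race, not a leak: the same container IDs were already removed moments earlier (the 15:03:07.02 to 15:03:07.05 "RemoveContainer" records), so the follow-up ContainerStatus lookup has nothing to report and the deletor just logs it. Cleanup code typically treats gRPC NotFound as "already deleted"; a sketch of that pattern (removeContainer is a hypothetical stand-in for the CRI call, not kubelet code):

    package main

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer is a hypothetical stand-in for a CRI RemoveContainer call.
    func removeContainer(id string) error {
        return status.Error(codes.NotFound, "could not find container "+id)
    }

    // removeIfPresent treats NotFound as success: the container is already gone.
    func removeIfPresent(id string) error {
        if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

    func main() {
        _ = removeIfPresent("0c92f729ea167a8db6b407fe25c4530b8d597d44532881c91b48aa885c665e12")
    }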
Jan 26 15:03:09 crc kubenswrapper[4922]: I0126 15:03:09.092711 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"
Jan 26 15:03:09 crc kubenswrapper[4922]: E0126 15:03:09.093556 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:03:23 crc kubenswrapper[4922]: I0126 15:03:23.098756 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"
Jan 26 15:03:23 crc kubenswrapper[4922]: E0126 15:03:23.099473 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:03:35 crc kubenswrapper[4922]: I0126 15:03:35.093934 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"
Jan 26 15:03:35 crc kubenswrapper[4922]: E0126 15:03:35.094808 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:03:49 crc kubenswrapper[4922]: I0126 15:03:49.093240 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"
Jan 26 15:03:50 crc kubenswrapper[4922]: I0126 15:03:50.547450 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"7561151417e9b46f4159336a4c990b1ba45ba0f4948800b608e954f6caba6a6e"}
Jan 26 15:06:11 crc kubenswrapper[4922]: I0126 15:06:11.307057 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:06:11 crc kubenswrapper[4922]: I0126 15:06:11.307767 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:06:41 crc kubenswrapper[4922]: I0126 15:06:41.306450 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:06:41 crc kubenswrapper[4922]: I0126 15:06:41.306975 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
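The Liveness failures above are ordinary HTTP GETs against the daemon's health endpoint; "connect: connection refused" means nothing was listening on 127.0.0.1:8798 at all, as opposed to an unhealthy HTTP status code. Roughly what the kubelet's HTTP prober is doing here (a sketch; the exact timeout value is an assumption):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second} // timeout value is assumed
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            fmt.Println("Probe failed:", err) // e.g. "connect: connection refused"
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.StatusCode) // kubelet counts 2xx/3xx as healthy
    }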
Jan 26 15:07:11 crc kubenswrapper[4922]: I0126 15:07:11.306791 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:07:11 crc kubenswrapper[4922]: I0126 15:07:11.307556 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:07:11 crc kubenswrapper[4922]: I0126 15:07:11.307624 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j"
Jan 26 15:07:11 crc kubenswrapper[4922]: I0126 15:07:11.308741 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7561151417e9b46f4159336a4c990b1ba45ba0f4948800b608e954f6caba6a6e"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 15:07:11 crc kubenswrapper[4922]: I0126 15:07:11.308838 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://7561151417e9b46f4159336a4c990b1ba45ba0f4948800b608e954f6caba6a6e" gracePeriod=600
Jan 26 15:07:11 crc kubenswrapper[4922]: I0126 15:07:11.746100 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="7561151417e9b46f4159336a4c990b1ba45ba0f4948800b608e954f6caba6a6e" exitCode=0
Jan 26 15:07:11 crc kubenswrapper[4922]: I0126 15:07:11.746160 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"7561151417e9b46f4159336a4c990b1ba45ba0f4948800b608e954f6caba6a6e"}
Jan 26 15:07:11 crc kubenswrapper[4922]: I0126 15:07:11.746487 4922 scope.go:117] "RemoveContainer" containerID="d31e68f0c9e8791ca6ddc0758e2b9392eb00e70345b269b98a2d3c7016489cd3"
Jan 26 15:07:12 crc kubenswrapper[4922]: I0126 15:07:12.756686 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"}
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.258462 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x28r7"]
Jan 26 15:08:21 crc kubenswrapper[4922]: E0126 15:08:21.260835 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b61718-e961-41e4-9035-8689eee9467d" containerName="extract-utilities"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.260868 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b61718-e961-41e4-9035-8689eee9467d" containerName="extract-utilities"
Jan 26 15:08:21 crc kubenswrapper[4922]: E0126 15:08:21.260883 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b61718-e961-41e4-9035-8689eee9467d" containerName="extract-content"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.260892 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b61718-e961-41e4-9035-8689eee9467d" containerName="extract-content"
Jan 26 15:08:21 crc kubenswrapper[4922]: E0126 15:08:21.260930 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b61718-e961-41e4-9035-8689eee9467d" containerName="registry-server"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.260939 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b61718-e961-41e4-9035-8689eee9467d" containerName="registry-server"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.261281 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b61718-e961-41e4-9035-8689eee9467d" containerName="registry-server"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.263515 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.276651 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x28r7"]
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.376544 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-catalog-content\") pod \"redhat-marketplace-x28r7\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") " pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.376615 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnl5\" (UniqueName: \"kubernetes.io/projected/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-kube-api-access-7dnl5\") pod \"redhat-marketplace-x28r7\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") " pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.376676 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-utilities\") pod \"redhat-marketplace-x28r7\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") " pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.478652 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-catalog-content\") pod \"redhat-marketplace-x28r7\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") " pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.478712 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnl5\" (UniqueName: \"kubernetes.io/projected/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-kube-api-access-7dnl5\") pod \"redhat-marketplace-x28r7\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") " pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.478759 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-utilities\") pod \"redhat-marketplace-x28r7\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") " pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.479209 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-utilities\") pod \"redhat-marketplace-x28r7\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") " pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.479355 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-catalog-content\") pod \"redhat-marketplace-x28r7\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") " pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.506877 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnl5\" (UniqueName: \"kubernetes.io/projected/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-kube-api-access-7dnl5\") pod \"redhat-marketplace-x28r7\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") " pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:21 crc kubenswrapper[4922]: I0126 15:08:21.596287 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:22 crc kubenswrapper[4922]: I0126 15:08:22.112897 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x28r7"]
Jan 26 15:08:22 crc kubenswrapper[4922]: E0126 15:08:22.498620 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ca6da2c_e0c6_40de_9632_bb6b05c6eb07.slice/crio-conmon-a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 15:08:22 crc kubenswrapper[4922]: I0126 15:08:22.576557 4922 generic.go:334] "Generic (PLEG): container finished" podID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerID="a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca" exitCode=0
Jan 26 15:08:22 crc kubenswrapper[4922]: I0126 15:08:22.576623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x28r7" event={"ID":"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07","Type":"ContainerDied","Data":"a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca"}
Jan 26 15:08:22 crc kubenswrapper[4922]: I0126 15:08:22.576942 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x28r7" event={"ID":"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07","Type":"ContainerStarted","Data":"744b663050e41334a98372052735a7c605e49004ef987ad2d7fcb1323ec31478"}
Jan 26 15:08:22 crc kubenswrapper[4922]: I0126 15:08:22.578814 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.597707 4922 generic.go:334] "Generic (PLEG): container finished" podID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerID="2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f" exitCode=0
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.597760 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x28r7" event={"ID":"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07","Type":"ContainerDied","Data":"2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f"}
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.653091 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lj5hs"]
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.655594 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.682770 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lj5hs"]
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.846819 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4fvx\" (UniqueName: \"kubernetes.io/projected/48e274dc-91fe-426f-984e-1c4150cf6060-kube-api-access-k4fvx\") pod \"certified-operators-lj5hs\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.846909 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-utilities\") pod \"certified-operators-lj5hs\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.846965 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-catalog-content\") pod \"certified-operators-lj5hs\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.949302 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4fvx\" (UniqueName: \"kubernetes.io/projected/48e274dc-91fe-426f-984e-1c4150cf6060-kube-api-access-k4fvx\") pod \"certified-operators-lj5hs\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.949392 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-utilities\") pod \"certified-operators-lj5hs\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.949448 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-catalog-content\") pod \"certified-operators-lj5hs\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.949890 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-utilities\") pod \"certified-operators-lj5hs\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.949925 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-catalog-content\") pod \"certified-operators-lj5hs\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:24 crc kubenswrapper[4922]: I0126 15:08:24.988189 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4fvx\" (UniqueName: \"kubernetes.io/projected/48e274dc-91fe-426f-984e-1c4150cf6060-kube-api-access-k4fvx\") pod \"certified-operators-lj5hs\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:25 crc kubenswrapper[4922]: I0126 15:08:25.278476 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj5hs"
Jan 26 15:08:25 crc kubenswrapper[4922]: I0126 15:08:25.612943 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x28r7" event={"ID":"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07","Type":"ContainerStarted","Data":"aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893"}
Jan 26 15:08:25 crc kubenswrapper[4922]: I0126 15:08:25.663751 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x28r7" podStartSLOduration=2.223432458 podStartE2EDuration="4.663728645s" podCreationTimestamp="2026-01-26 15:08:21 +0000 UTC" firstStartedPulling="2026-01-26 15:08:22.578572902 +0000 UTC m=+3519.780835674" lastFinishedPulling="2026-01-26 15:08:25.018869089 +0000 UTC m=+3522.221131861" observedRunningTime="2026-01-26 15:08:25.655714708 +0000 UTC m=+3522.857977480" watchObservedRunningTime="2026-01-26 15:08:25.663728645 +0000 UTC m=+3522.865991417"
Jan 26 15:08:25 crc kubenswrapper[4922]: I0126 15:08:25.817160 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lj5hs"]
Jan 26 15:08:26 crc kubenswrapper[4922]: I0126 15:08:26.623138 4922 generic.go:334] "Generic (PLEG): container finished" podID="48e274dc-91fe-426f-984e-1c4150cf6060" containerID="5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce" exitCode=0
Jan 26 15:08:26 crc kubenswrapper[4922]: I0126 15:08:26.623230 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj5hs" event={"ID":"48e274dc-91fe-426f-984e-1c4150cf6060","Type":"ContainerDied","Data":"5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce"}
Jan 26 15:08:26 crc kubenswrapper[4922]: I0126 15:08:26.623954 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj5hs" event={"ID":"48e274dc-91fe-426f-984e-1c4150cf6060","Type":"ContainerStarted","Data":"119ab006f1157bf4e45f96403fde5e37c579c0f767687ce327c4262d2e7c157d"}
Jan 26 15:08:27 crc kubenswrapper[4922]: I0126 15:08:27.638501 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj5hs" event={"ID":"48e274dc-91fe-426f-984e-1c4150cf6060","Type":"ContainerStarted","Data":"ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1"}
Jan 26 15:08:29 crc kubenswrapper[4922]: I0126 15:08:29.662166 4922 generic.go:334] "Generic (PLEG): container finished" podID="48e274dc-91fe-426f-984e-1c4150cf6060" containerID="ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1" exitCode=0
Jan 26 15:08:29 crc kubenswrapper[4922]: I0126 15:08:29.662260 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj5hs" event={"ID":"48e274dc-91fe-426f-984e-1c4150cf6060","Type":"ContainerDied","Data":"ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1"}
Jan 26 15:08:30 crc kubenswrapper[4922]: I0126 15:08:30.679182 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj5hs" event={"ID":"48e274dc-91fe-426f-984e-1c4150cf6060","Type":"ContainerStarted","Data":"739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539"}
Jan 26 15:08:30 crc kubenswrapper[4922]: I0126 15:08:30.732787 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lj5hs" podStartSLOduration=3.271327511 podStartE2EDuration="6.732744632s" podCreationTimestamp="2026-01-26 15:08:24 +0000 UTC" firstStartedPulling="2026-01-26 15:08:26.624762399 +0000 UTC m=+3523.827025171" lastFinishedPulling="2026-01-26 15:08:30.08617951 +0000 UTC m=+3527.288442292" observedRunningTime="2026-01-26 15:08:30.719271776 +0000 UTC m=+3527.921534558" watchObservedRunningTime="2026-01-26 15:08:30.732744632 +0000 UTC m=+3527.935007404"
Jan 26 15:08:31 crc kubenswrapper[4922]: I0126 15:08:31.596588 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:31 crc kubenswrapper[4922]: I0126 15:08:31.596982 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:31 crc kubenswrapper[4922]: I0126 15:08:31.673213 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:31 crc kubenswrapper[4922]: I0126 15:08:31.741640 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:32 crc kubenswrapper[4922]: I0126 15:08:32.660636 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x28r7"]
Jan 26 15:08:33 crc kubenswrapper[4922]: I0126 15:08:33.708648 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x28r7" podUID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerName="registry-server" containerID="cri-o://aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893" gracePeriod=2
Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.232714 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x28r7"
Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.304907 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-catalog-content\") pod \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") "
Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.305114 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dnl5\" (UniqueName: \"kubernetes.io/projected/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-kube-api-access-7dnl5\") pod \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") "
Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.305171 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-utilities\") pod \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\" (UID: \"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07\") "
Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.306455 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-utilities" (OuterVolumeSpecName: "utilities") pod "2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" (UID: "2ca6da2c-e0c6-40de-9632-bb6b05c6eb07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.312673 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-kube-api-access-7dnl5" (OuterVolumeSpecName: "kube-api-access-7dnl5") pod "2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" (UID: "2ca6da2c-e0c6-40de-9632-bb6b05c6eb07"). InnerVolumeSpecName "kube-api-access-7dnl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.336360 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" (UID: "2ca6da2c-e0c6-40de-9632-bb6b05c6eb07"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.406848 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.406883 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dnl5\" (UniqueName: \"kubernetes.io/projected/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-kube-api-access-7dnl5\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.406898 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.719803 4922 generic.go:334] "Generic (PLEG): container finished" podID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerID="aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893" exitCode=0 Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.719846 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x28r7" event={"ID":"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07","Type":"ContainerDied","Data":"aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893"} Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.719875 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x28r7" event={"ID":"2ca6da2c-e0c6-40de-9632-bb6b05c6eb07","Type":"ContainerDied","Data":"744b663050e41334a98372052735a7c605e49004ef987ad2d7fcb1323ec31478"} Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.719879 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x28r7" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.719892 4922 scope.go:117] "RemoveContainer" containerID="aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.752851 4922 scope.go:117] "RemoveContainer" containerID="2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.753130 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x28r7"] Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.762148 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x28r7"] Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.797758 4922 scope.go:117] "RemoveContainer" containerID="a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.827407 4922 scope.go:117] "RemoveContainer" containerID="aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893" Jan 26 15:08:34 crc kubenswrapper[4922]: E0126 15:08:34.829024 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893\": container with ID starting with aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893 not found: ID does not exist" containerID="aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.829258 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893"} err="failed to get container status \"aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893\": rpc error: code = NotFound desc = could not find container \"aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893\": container with ID starting with aed626e482eda807eb170742f8631de2828b02dfdf32af26145d362401d86893 not found: ID does not exist" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.829385 4922 scope.go:117] "RemoveContainer" containerID="2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f" Jan 26 15:08:34 crc kubenswrapper[4922]: E0126 15:08:34.830733 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f\": container with ID starting with 2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f not found: ID does not exist" containerID="2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.830797 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f"} err="failed to get container status \"2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f\": rpc error: code = NotFound desc = could not find container \"2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f\": container with ID starting with 2cd8714764f8098b6781dda4360108efab62c220c44ac65f3d3a6ebee1e57d4f not found: ID does not exist" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.830839 4922 scope.go:117] "RemoveContainer" 
containerID="a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca" Jan 26 15:08:34 crc kubenswrapper[4922]: E0126 15:08:34.831252 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca\": container with ID starting with a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca not found: ID does not exist" containerID="a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca" Jan 26 15:08:34 crc kubenswrapper[4922]: I0126 15:08:34.831298 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca"} err="failed to get container status \"a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca\": rpc error: code = NotFound desc = could not find container \"a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca\": container with ID starting with a8dd3a69c5d3bb2417508ed438aed01607cac0865e946987a12271bf85a6eeca not found: ID does not exist" Jan 26 15:08:35 crc kubenswrapper[4922]: I0126 15:08:35.110238 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" path="/var/lib/kubelet/pods/2ca6da2c-e0c6-40de-9632-bb6b05c6eb07/volumes" Jan 26 15:08:35 crc kubenswrapper[4922]: I0126 15:08:35.279122 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lj5hs" Jan 26 15:08:35 crc kubenswrapper[4922]: I0126 15:08:35.279579 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lj5hs" Jan 26 15:08:35 crc kubenswrapper[4922]: I0126 15:08:35.343267 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lj5hs" Jan 26 15:08:35 crc kubenswrapper[4922]: I0126 15:08:35.811236 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lj5hs" Jan 26 15:08:37 crc kubenswrapper[4922]: I0126 15:08:37.244060 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lj5hs"] Jan 26 15:08:38 crc kubenswrapper[4922]: I0126 15:08:38.784179 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lj5hs" podUID="48e274dc-91fe-426f-984e-1c4150cf6060" containerName="registry-server" containerID="cri-o://739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539" gracePeriod=2 Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.259736 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lj5hs" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.416705 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-catalog-content\") pod \"48e274dc-91fe-426f-984e-1c4150cf6060\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.416798 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-utilities\") pod \"48e274dc-91fe-426f-984e-1c4150cf6060\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.416955 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4fvx\" (UniqueName: \"kubernetes.io/projected/48e274dc-91fe-426f-984e-1c4150cf6060-kube-api-access-k4fvx\") pod \"48e274dc-91fe-426f-984e-1c4150cf6060\" (UID: \"48e274dc-91fe-426f-984e-1c4150cf6060\") " Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.418033 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-utilities" (OuterVolumeSpecName: "utilities") pod "48e274dc-91fe-426f-984e-1c4150cf6060" (UID: "48e274dc-91fe-426f-984e-1c4150cf6060"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.422672 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e274dc-91fe-426f-984e-1c4150cf6060-kube-api-access-k4fvx" (OuterVolumeSpecName: "kube-api-access-k4fvx") pod "48e274dc-91fe-426f-984e-1c4150cf6060" (UID: "48e274dc-91fe-426f-984e-1c4150cf6060"). InnerVolumeSpecName "kube-api-access-k4fvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.467487 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48e274dc-91fe-426f-984e-1c4150cf6060" (UID: "48e274dc-91fe-426f-984e-1c4150cf6060"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.519347 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.519383 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e274dc-91fe-426f-984e-1c4150cf6060-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.519394 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4fvx\" (UniqueName: \"kubernetes.io/projected/48e274dc-91fe-426f-984e-1c4150cf6060-kube-api-access-k4fvx\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.795820 4922 generic.go:334] "Generic (PLEG): container finished" podID="48e274dc-91fe-426f-984e-1c4150cf6060" containerID="739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539" exitCode=0 Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.795874 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj5hs" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.795873 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj5hs" event={"ID":"48e274dc-91fe-426f-984e-1c4150cf6060","Type":"ContainerDied","Data":"739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539"} Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.795988 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj5hs" event={"ID":"48e274dc-91fe-426f-984e-1c4150cf6060","Type":"ContainerDied","Data":"119ab006f1157bf4e45f96403fde5e37c579c0f767687ce327c4262d2e7c157d"} Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.796015 4922 scope.go:117] "RemoveContainer" containerID="739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.819393 4922 scope.go:117] "RemoveContainer" containerID="ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.832965 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lj5hs"] Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.841236 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lj5hs"] Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.862960 4922 scope.go:117] "RemoveContainer" containerID="5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.917946 4922 scope.go:117] "RemoveContainer" containerID="739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539" Jan 26 15:08:39 crc kubenswrapper[4922]: E0126 15:08:39.918550 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539\": container with ID starting with 739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539 not found: ID does not exist" containerID="739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.918608 
4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539"} err="failed to get container status \"739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539\": rpc error: code = NotFound desc = could not find container \"739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539\": container with ID starting with 739f0e67d5325a0122cc640aa77ee60a0ac0658c519536ea31173eb51e85b539 not found: ID does not exist" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.918639 4922 scope.go:117] "RemoveContainer" containerID="ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1" Jan 26 15:08:39 crc kubenswrapper[4922]: E0126 15:08:39.919112 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1\": container with ID starting with ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1 not found: ID does not exist" containerID="ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.919177 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1"} err="failed to get container status \"ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1\": rpc error: code = NotFound desc = could not find container \"ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1\": container with ID starting with ea1db6be34e5968bc21a5494efa7fe961f395cbbe675b80d0b1567954f317eb1 not found: ID does not exist" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.919211 4922 scope.go:117] "RemoveContainer" containerID="5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce" Jan 26 15:08:39 crc kubenswrapper[4922]: E0126 15:08:39.920108 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce\": container with ID starting with 5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce not found: ID does not exist" containerID="5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce" Jan 26 15:08:39 crc kubenswrapper[4922]: I0126 15:08:39.920143 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce"} err="failed to get container status \"5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce\": rpc error: code = NotFound desc = could not find container \"5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce\": container with ID starting with 5ae31bf0fa717b1af97a8987d27d1c93bc8b56a94bbfdb6e90ec18a76d81a7ce not found: ID does not exist" Jan 26 15:08:41 crc kubenswrapper[4922]: I0126 15:08:41.112482 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e274dc-91fe-426f-984e-1c4150cf6060" path="/var/lib/kubelet/pods/48e274dc-91fe-426f-984e-1c4150cf6060/volumes" Jan 26 15:09:11 crc kubenswrapper[4922]: I0126 15:09:11.306533 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:09:11 crc kubenswrapper[4922]: I0126 15:09:11.307050 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:09:41 crc kubenswrapper[4922]: I0126 15:09:41.306622 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:09:41 crc kubenswrapper[4922]: I0126 15:09:41.307155 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:10:11 crc kubenswrapper[4922]: I0126 15:10:11.308380 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:10:11 crc kubenswrapper[4922]: I0126 15:10:11.309002 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:10:11 crc kubenswrapper[4922]: I0126 15:10:11.309161 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 15:10:11 crc kubenswrapper[4922]: I0126 15:10:11.310205 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:10:11 crc kubenswrapper[4922]: I0126 15:10:11.310285 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" gracePeriod=600 Jan 26 15:10:11 crc kubenswrapper[4922]: E0126 15:10:11.436876 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:10:11 crc kubenswrapper[4922]: I0126 15:10:11.782178 4922 
generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" exitCode=0 Jan 26 15:10:11 crc kubenswrapper[4922]: I0126 15:10:11.782227 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"} Jan 26 15:10:11 crc kubenswrapper[4922]: I0126 15:10:11.782301 4922 scope.go:117] "RemoveContainer" containerID="7561151417e9b46f4159336a4c990b1ba45ba0f4948800b608e954f6caba6a6e" Jan 26 15:10:11 crc kubenswrapper[4922]: I0126 15:10:11.783525 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:10:11 crc kubenswrapper[4922]: E0126 15:10:11.784210 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:10:23 crc kubenswrapper[4922]: I0126 15:10:23.099272 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:10:23 crc kubenswrapper[4922]: E0126 15:10:23.100188 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:10:35 crc kubenswrapper[4922]: I0126 15:10:35.093098 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:10:35 crc kubenswrapper[4922]: E0126 15:10:35.095291 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:10:50 crc kubenswrapper[4922]: I0126 15:10:50.093146 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:10:50 crc kubenswrapper[4922]: E0126 15:10:50.094121 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:11:04 crc kubenswrapper[4922]: I0126 15:11:04.092991 4922 scope.go:117] "RemoveContainer" 
containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:11:04 crc kubenswrapper[4922]: E0126 15:11:04.093919 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:11:19 crc kubenswrapper[4922]: I0126 15:11:19.097398 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:11:19 crc kubenswrapper[4922]: E0126 15:11:19.098308 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.949893 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q2gwq"] Jan 26 15:11:20 crc kubenswrapper[4922]: E0126 15:11:20.950334 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e274dc-91fe-426f-984e-1c4150cf6060" containerName="extract-content" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.950348 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e274dc-91fe-426f-984e-1c4150cf6060" containerName="extract-content" Jan 26 15:11:20 crc kubenswrapper[4922]: E0126 15:11:20.950383 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerName="extract-content" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.950389 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerName="extract-content" Jan 26 15:11:20 crc kubenswrapper[4922]: E0126 15:11:20.950401 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e274dc-91fe-426f-984e-1c4150cf6060" containerName="registry-server" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.950408 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e274dc-91fe-426f-984e-1c4150cf6060" containerName="registry-server" Jan 26 15:11:20 crc kubenswrapper[4922]: E0126 15:11:20.950422 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerName="extract-utilities" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.950428 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerName="extract-utilities" Jan 26 15:11:20 crc kubenswrapper[4922]: E0126 15:11:20.950436 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e274dc-91fe-426f-984e-1c4150cf6060" containerName="extract-utilities" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.950442 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e274dc-91fe-426f-984e-1c4150cf6060" containerName="extract-utilities" Jan 26 15:11:20 crc kubenswrapper[4922]: E0126 15:11:20.950452 4922 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerName="registry-server" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.950458 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerName="registry-server" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.950644 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca6da2c-e0c6-40de-9632-bb6b05c6eb07" containerName="registry-server" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.950665 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e274dc-91fe-426f-984e-1c4150cf6060" containerName="registry-server" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.952193 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.973478 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2gwq"] Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.978157 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xglp\" (UniqueName: \"kubernetes.io/projected/e57f87ec-2866-4694-b3f4-0907ca749e1e-kube-api-access-6xglp\") pod \"community-operators-q2gwq\" (UID: \"e57f87ec-2866-4694-b3f4-0907ca749e1e\") " pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.978225 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57f87ec-2866-4694-b3f4-0907ca749e1e-catalog-content\") pod \"community-operators-q2gwq\" (UID: \"e57f87ec-2866-4694-b3f4-0907ca749e1e\") " pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:20 crc kubenswrapper[4922]: I0126 15:11:20.978331 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57f87ec-2866-4694-b3f4-0907ca749e1e-utilities\") pod \"community-operators-q2gwq\" (UID: \"e57f87ec-2866-4694-b3f4-0907ca749e1e\") " pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:21 crc kubenswrapper[4922]: I0126 15:11:21.080970 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xglp\" (UniqueName: \"kubernetes.io/projected/e57f87ec-2866-4694-b3f4-0907ca749e1e-kube-api-access-6xglp\") pod \"community-operators-q2gwq\" (UID: \"e57f87ec-2866-4694-b3f4-0907ca749e1e\") " pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:21 crc kubenswrapper[4922]: I0126 15:11:21.081039 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57f87ec-2866-4694-b3f4-0907ca749e1e-catalog-content\") pod \"community-operators-q2gwq\" (UID: \"e57f87ec-2866-4694-b3f4-0907ca749e1e\") " pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:21 crc kubenswrapper[4922]: I0126 15:11:21.081166 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57f87ec-2866-4694-b3f4-0907ca749e1e-utilities\") pod \"community-operators-q2gwq\" (UID: \"e57f87ec-2866-4694-b3f4-0907ca749e1e\") " pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:21 crc kubenswrapper[4922]: I0126 15:11:21.081923 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57f87ec-2866-4694-b3f4-0907ca749e1e-utilities\") pod \"community-operators-q2gwq\" (UID: \"e57f87ec-2866-4694-b3f4-0907ca749e1e\") " pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:21 crc kubenswrapper[4922]: I0126 15:11:21.081974 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57f87ec-2866-4694-b3f4-0907ca749e1e-catalog-content\") pod \"community-operators-q2gwq\" (UID: \"e57f87ec-2866-4694-b3f4-0907ca749e1e\") " pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:21 crc kubenswrapper[4922]: I0126 15:11:21.108877 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xglp\" (UniqueName: \"kubernetes.io/projected/e57f87ec-2866-4694-b3f4-0907ca749e1e-kube-api-access-6xglp\") pod \"community-operators-q2gwq\" (UID: \"e57f87ec-2866-4694-b3f4-0907ca749e1e\") " pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:21 crc kubenswrapper[4922]: I0126 15:11:21.333265 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:22 crc kubenswrapper[4922]: I0126 15:11:21.888004 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2gwq"] Jan 26 15:11:22 crc kubenswrapper[4922]: I0126 15:11:22.478725 4922 generic.go:334] "Generic (PLEG): container finished" podID="e57f87ec-2866-4694-b3f4-0907ca749e1e" containerID="5f84a680d4a85168f766dba77052b106047f9953dfac1f9b93baff039f3aa665" exitCode=0 Jan 26 15:11:22 crc kubenswrapper[4922]: I0126 15:11:22.478917 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2gwq" event={"ID":"e57f87ec-2866-4694-b3f4-0907ca749e1e","Type":"ContainerDied","Data":"5f84a680d4a85168f766dba77052b106047f9953dfac1f9b93baff039f3aa665"} Jan 26 15:11:22 crc kubenswrapper[4922]: I0126 15:11:22.479430 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2gwq" event={"ID":"e57f87ec-2866-4694-b3f4-0907ca749e1e","Type":"ContainerStarted","Data":"42df2c5f5e97da5d16e5c3ffdc65c92bc40d98e5ae4d62877bf906792a623d87"} Jan 26 15:11:27 crc kubenswrapper[4922]: I0126 15:11:27.525657 4922 generic.go:334] "Generic (PLEG): container finished" podID="e57f87ec-2866-4694-b3f4-0907ca749e1e" containerID="3324c42e203e9ab891234d42304e1e4da2ee9fc22bf0df2f4edaa7720ae9c58e" exitCode=0 Jan 26 15:11:27 crc kubenswrapper[4922]: I0126 15:11:27.525772 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2gwq" event={"ID":"e57f87ec-2866-4694-b3f4-0907ca749e1e","Type":"ContainerDied","Data":"3324c42e203e9ab891234d42304e1e4da2ee9fc22bf0df2f4edaa7720ae9c58e"} Jan 26 15:11:28 crc kubenswrapper[4922]: I0126 15:11:28.538514 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2gwq" event={"ID":"e57f87ec-2866-4694-b3f4-0907ca749e1e","Type":"ContainerStarted","Data":"5fb31685bc8591f95b881877ea7ffb916744858345b04300eff71915e808f02d"} Jan 26 15:11:28 crc kubenswrapper[4922]: I0126 15:11:28.564883 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q2gwq" podStartSLOduration=2.908608452 podStartE2EDuration="8.564858282s" 
podCreationTimestamp="2026-01-26 15:11:20 +0000 UTC" firstStartedPulling="2026-01-26 15:11:22.48066774 +0000 UTC m=+3699.682930512" lastFinishedPulling="2026-01-26 15:11:28.13691757 +0000 UTC m=+3705.339180342" observedRunningTime="2026-01-26 15:11:28.560580365 +0000 UTC m=+3705.762843157" watchObservedRunningTime="2026-01-26 15:11:28.564858282 +0000 UTC m=+3705.767121074" Jan 26 15:11:31 crc kubenswrapper[4922]: I0126 15:11:31.334049 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:31 crc kubenswrapper[4922]: I0126 15:11:31.335773 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:31 crc kubenswrapper[4922]: I0126 15:11:31.410631 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:32 crc kubenswrapper[4922]: I0126 15:11:32.092998 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:11:32 crc kubenswrapper[4922]: E0126 15:11:32.093629 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:11:41 crc kubenswrapper[4922]: I0126 15:11:41.394835 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q2gwq" Jan 26 15:11:41 crc kubenswrapper[4922]: I0126 15:11:41.471059 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2gwq"] Jan 26 15:11:41 crc kubenswrapper[4922]: I0126 15:11:41.516409 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cshmk"] Jan 26 15:11:41 crc kubenswrapper[4922]: I0126 15:11:41.516847 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cshmk" podUID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerName="registry-server" containerID="cri-o://184a6c355527b1725ed00cc4cfdd2315c82887c44240e36c5cfce4542597716d" gracePeriod=2 Jan 26 15:11:41 crc kubenswrapper[4922]: I0126 15:11:41.686915 4922 generic.go:334] "Generic (PLEG): container finished" podID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerID="184a6c355527b1725ed00cc4cfdd2315c82887c44240e36c5cfce4542597716d" exitCode=0 Jan 26 15:11:41 crc kubenswrapper[4922]: I0126 15:11:41.687023 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cshmk" event={"ID":"5e368904-69fc-43c6-b5d1-9e4bfdf7e402","Type":"ContainerDied","Data":"184a6c355527b1725ed00cc4cfdd2315c82887c44240e36c5cfce4542597716d"} Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.056407 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cshmk" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.241204 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j89fs\" (UniqueName: \"kubernetes.io/projected/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-kube-api-access-j89fs\") pod \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.241377 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-utilities\") pod \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.241482 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-catalog-content\") pod \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\" (UID: \"5e368904-69fc-43c6-b5d1-9e4bfdf7e402\") " Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.246399 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-utilities" (OuterVolumeSpecName: "utilities") pod "5e368904-69fc-43c6-b5d1-9e4bfdf7e402" (UID: "5e368904-69fc-43c6-b5d1-9e4bfdf7e402"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.266656 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-kube-api-access-j89fs" (OuterVolumeSpecName: "kube-api-access-j89fs") pod "5e368904-69fc-43c6-b5d1-9e4bfdf7e402" (UID: "5e368904-69fc-43c6-b5d1-9e4bfdf7e402"). InnerVolumeSpecName "kube-api-access-j89fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.346432 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j89fs\" (UniqueName: \"kubernetes.io/projected/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-kube-api-access-j89fs\") on node \"crc\" DevicePath \"\"" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.346466 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.363333 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e368904-69fc-43c6-b5d1-9e4bfdf7e402" (UID: "5e368904-69fc-43c6-b5d1-9e4bfdf7e402"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.452587 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e368904-69fc-43c6-b5d1-9e4bfdf7e402-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.736685 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cshmk" event={"ID":"5e368904-69fc-43c6-b5d1-9e4bfdf7e402","Type":"ContainerDied","Data":"0239e6c2ad972ba2cbf392a02638ea7d5de825a9305042c6c5cbaa6ad1996a2d"} Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.736748 4922 scope.go:117] "RemoveContainer" containerID="184a6c355527b1725ed00cc4cfdd2315c82887c44240e36c5cfce4542597716d" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.737011 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cshmk" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.806741 4922 scope.go:117] "RemoveContainer" containerID="1a51d7c0563c73292865e89929057940f04180f0457a9a4578d533dd01c1c43e" Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.832626 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cshmk"] Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.839659 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cshmk"] Jan 26 15:11:42 crc kubenswrapper[4922]: I0126 15:11:42.842661 4922 scope.go:117] "RemoveContainer" containerID="897e25104e2e37d53b1c116ac72cfd6113afaddb7e962c35f337b79cf6b99881" Jan 26 15:11:43 crc kubenswrapper[4922]: I0126 15:11:43.105396 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" path="/var/lib/kubelet/pods/5e368904-69fc-43c6-b5d1-9e4bfdf7e402/volumes" Jan 26 15:11:46 crc kubenswrapper[4922]: I0126 15:11:46.093174 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:11:46 crc kubenswrapper[4922]: E0126 15:11:46.094245 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:12:00 crc kubenswrapper[4922]: I0126 15:12:00.093277 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:12:00 crc kubenswrapper[4922]: E0126 15:12:00.094114 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:12:14 crc kubenswrapper[4922]: I0126 15:12:14.093618 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:12:14 crc kubenswrapper[4922]: E0126 
15:12:14.094685 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:12:29 crc kubenswrapper[4922]: I0126 15:12:29.094328 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:12:29 crc kubenswrapper[4922]: E0126 15:12:29.095385 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:12:40 crc kubenswrapper[4922]: I0126 15:12:40.094342 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:12:40 crc kubenswrapper[4922]: E0126 15:12:40.095148 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:12:54 crc kubenswrapper[4922]: I0126 15:12:54.094099 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:12:54 crc kubenswrapper[4922]: E0126 15:12:54.095539 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:13:05 crc kubenswrapper[4922]: I0126 15:13:05.092537 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:13:05 crc kubenswrapper[4922]: E0126 15:13:05.093510 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:13:16 crc kubenswrapper[4922]: I0126 15:13:16.093162 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd" Jan 26 15:13:16 crc kubenswrapper[4922]: E0126 15:13:16.094580 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.136361 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z264r"]
Jan 26 15:13:29 crc kubenswrapper[4922]: E0126 15:13:29.137353 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerName="extract-utilities"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.137369 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerName="extract-utilities"
Jan 26 15:13:29 crc kubenswrapper[4922]: E0126 15:13:29.137391 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerName="extract-content"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.137399 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerName="extract-content"
Jan 26 15:13:29 crc kubenswrapper[4922]: E0126 15:13:29.137421 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerName="registry-server"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.137429 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerName="registry-server"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.137672 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e368904-69fc-43c6-b5d1-9e4bfdf7e402" containerName="registry-server"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.139668 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z264r"]
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.139830 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.240118 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-catalog-content\") pod \"redhat-operators-z264r\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") " pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.240252 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-utilities\") pod \"redhat-operators-z264r\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") " pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.240621 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5nq\" (UniqueName: \"kubernetes.io/projected/ae819df4-63b1-457f-a406-fc5af6100f71-kube-api-access-rj5nq\") pod \"redhat-operators-z264r\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") " pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.342690 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj5nq\" (UniqueName: \"kubernetes.io/projected/ae819df4-63b1-457f-a406-fc5af6100f71-kube-api-access-rj5nq\") pod \"redhat-operators-z264r\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") " pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.343077 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-catalog-content\") pod \"redhat-operators-z264r\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") " pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.343148 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-utilities\") pod \"redhat-operators-z264r\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") " pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.343711 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-catalog-content\") pod \"redhat-operators-z264r\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") " pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.343728 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-utilities\") pod \"redhat-operators-z264r\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") " pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.368170 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj5nq\" (UniqueName: \"kubernetes.io/projected/ae819df4-63b1-457f-a406-fc5af6100f71-kube-api-access-rj5nq\") pod \"redhat-operators-z264r\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") " pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.464274 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:29 crc kubenswrapper[4922]: I0126 15:13:29.991637 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z264r"]
Jan 26 15:13:30 crc kubenswrapper[4922]: I0126 15:13:30.092493 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:13:30 crc kubenswrapper[4922]: E0126 15:13:30.093256 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:13:30 crc kubenswrapper[4922]: I0126 15:13:30.807796 4922 generic.go:334] "Generic (PLEG): container finished" podID="ae819df4-63b1-457f-a406-fc5af6100f71" containerID="5d8e714cb709aaeca1e83173af5984c130a31eaad3ab54e7e3fe10310d83bad1" exitCode=0
Jan 26 15:13:30 crc kubenswrapper[4922]: I0126 15:13:30.807932 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z264r" event={"ID":"ae819df4-63b1-457f-a406-fc5af6100f71","Type":"ContainerDied","Data":"5d8e714cb709aaeca1e83173af5984c130a31eaad3ab54e7e3fe10310d83bad1"}
Jan 26 15:13:30 crc kubenswrapper[4922]: I0126 15:13:30.808178 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z264r" event={"ID":"ae819df4-63b1-457f-a406-fc5af6100f71","Type":"ContainerStarted","Data":"3c764c0658e0ea38fc9cbc2149cdb7545f6c1dab3c7f7cecf765b84430f6847a"}
Jan 26 15:13:30 crc kubenswrapper[4922]: I0126 15:13:30.811978 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 15:13:32 crc kubenswrapper[4922]: I0126 15:13:32.838850 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z264r" event={"ID":"ae819df4-63b1-457f-a406-fc5af6100f71","Type":"ContainerStarted","Data":"06436d7ff47e94fd555ddfd0947e540f2dbbdf03b9c84fcacc3247900f8cabaf"}
Jan 26 15:13:37 crc kubenswrapper[4922]: I0126 15:13:37.891726 4922 generic.go:334] "Generic (PLEG): container finished" podID="ae819df4-63b1-457f-a406-fc5af6100f71" containerID="06436d7ff47e94fd555ddfd0947e540f2dbbdf03b9c84fcacc3247900f8cabaf" exitCode=0
Jan 26 15:13:37 crc kubenswrapper[4922]: I0126 15:13:37.892283 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z264r" event={"ID":"ae819df4-63b1-457f-a406-fc5af6100f71","Type":"ContainerDied","Data":"06436d7ff47e94fd555ddfd0947e540f2dbbdf03b9c84fcacc3247900f8cabaf"}
Jan 26 15:13:40 crc kubenswrapper[4922]: I0126 15:13:40.929880 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z264r" event={"ID":"ae819df4-63b1-457f-a406-fc5af6100f71","Type":"ContainerStarted","Data":"9a104119529c8cc9abb8161e61a2c9ecf3ebe5d8cc824809d6fcca241a937686"}
Jan 26 15:13:40 crc kubenswrapper[4922]: I0126 15:13:40.959984 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z264r" podStartSLOduration=3.116456389 podStartE2EDuration="11.959951293s" podCreationTimestamp="2026-01-26 15:13:29 +0000 UTC" firstStartedPulling="2026-01-26 15:13:30.811746045 +0000 UTC m=+3828.014008817" lastFinishedPulling="2026-01-26 15:13:39.655240929 +0000 UTC m=+3836.857503721" observedRunningTime="2026-01-26 15:13:40.948769628 +0000 UTC m=+3838.151032420" watchObservedRunningTime="2026-01-26 15:13:40.959951293 +0000 UTC m=+3838.162214105"
Jan 26 15:13:41 crc kubenswrapper[4922]: I0126 15:13:41.092881 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:13:41 crc kubenswrapper[4922]: E0126 15:13:41.093173 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:13:49 crc kubenswrapper[4922]: I0126 15:13:49.465738 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:49 crc kubenswrapper[4922]: I0126 15:13:49.466292 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:49 crc kubenswrapper[4922]: I0126 15:13:49.911742 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:50 crc kubenswrapper[4922]: I0126 15:13:50.099462 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:50 crc kubenswrapper[4922]: I0126 15:13:50.150712 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z264r"]
Jan 26 15:13:52 crc kubenswrapper[4922]: I0126 15:13:52.052634 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z264r" podUID="ae819df4-63b1-457f-a406-fc5af6100f71" containerName="registry-server" containerID="cri-o://9a104119529c8cc9abb8161e61a2c9ecf3ebe5d8cc824809d6fcca241a937686" gracePeriod=2
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.064669 4922 generic.go:334] "Generic (PLEG): container finished" podID="ae819df4-63b1-457f-a406-fc5af6100f71" containerID="9a104119529c8cc9abb8161e61a2c9ecf3ebe5d8cc824809d6fcca241a937686" exitCode=0
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.064752 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z264r" event={"ID":"ae819df4-63b1-457f-a406-fc5af6100f71","Type":"ContainerDied","Data":"9a104119529c8cc9abb8161e61a2c9ecf3ebe5d8cc824809d6fcca241a937686"}
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.287546 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.381489 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj5nq\" (UniqueName: \"kubernetes.io/projected/ae819df4-63b1-457f-a406-fc5af6100f71-kube-api-access-rj5nq\") pod \"ae819df4-63b1-457f-a406-fc5af6100f71\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") "
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.381580 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-utilities\") pod \"ae819df4-63b1-457f-a406-fc5af6100f71\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") "
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.381837 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-catalog-content\") pod \"ae819df4-63b1-457f-a406-fc5af6100f71\" (UID: \"ae819df4-63b1-457f-a406-fc5af6100f71\") "
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.382519 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-utilities" (OuterVolumeSpecName: "utilities") pod "ae819df4-63b1-457f-a406-fc5af6100f71" (UID: "ae819df4-63b1-457f-a406-fc5af6100f71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.387257 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae819df4-63b1-457f-a406-fc5af6100f71-kube-api-access-rj5nq" (OuterVolumeSpecName: "kube-api-access-rj5nq") pod "ae819df4-63b1-457f-a406-fc5af6100f71" (UID: "ae819df4-63b1-457f-a406-fc5af6100f71"). InnerVolumeSpecName "kube-api-access-rj5nq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.484277 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj5nq\" (UniqueName: \"kubernetes.io/projected/ae819df4-63b1-457f-a406-fc5af6100f71-kube-api-access-rj5nq\") on node \"crc\" DevicePath \"\""
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.484610 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.496405 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae819df4-63b1-457f-a406-fc5af6100f71" (UID: "ae819df4-63b1-457f-a406-fc5af6100f71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:13:53 crc kubenswrapper[4922]: I0126 15:13:53.587353 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae819df4-63b1-457f-a406-fc5af6100f71-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:13:54 crc kubenswrapper[4922]: I0126 15:13:54.085227 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z264r" event={"ID":"ae819df4-63b1-457f-a406-fc5af6100f71","Type":"ContainerDied","Data":"3c764c0658e0ea38fc9cbc2149cdb7545f6c1dab3c7f7cecf765b84430f6847a"}
Jan 26 15:13:54 crc kubenswrapper[4922]: I0126 15:13:54.085284 4922 scope.go:117] "RemoveContainer" containerID="9a104119529c8cc9abb8161e61a2c9ecf3ebe5d8cc824809d6fcca241a937686"
Jan 26 15:13:54 crc kubenswrapper[4922]: I0126 15:13:54.085349 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z264r"
Jan 26 15:13:54 crc kubenswrapper[4922]: I0126 15:13:54.093186 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:13:54 crc kubenswrapper[4922]: E0126 15:13:54.093793 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:13:54 crc kubenswrapper[4922]: I0126 15:13:54.125931 4922 scope.go:117] "RemoveContainer" containerID="06436d7ff47e94fd555ddfd0947e540f2dbbdf03b9c84fcacc3247900f8cabaf"
Jan 26 15:13:54 crc kubenswrapper[4922]: I0126 15:13:54.130408 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z264r"]
Jan 26 15:13:54 crc kubenswrapper[4922]: I0126 15:13:54.145276 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z264r"]
Jan 26 15:13:54 crc kubenswrapper[4922]: I0126 15:13:54.160630 4922 scope.go:117] "RemoveContainer" containerID="5d8e714cb709aaeca1e83173af5984c130a31eaad3ab54e7e3fe10310d83bad1"
Jan 26 15:13:55 crc kubenswrapper[4922]: I0126 15:13:55.118246 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae819df4-63b1-457f-a406-fc5af6100f71" path="/var/lib/kubelet/pods/ae819df4-63b1-457f-a406-fc5af6100f71/volumes"
Jan 26 15:14:09 crc kubenswrapper[4922]: I0126 15:14:09.093394 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:14:09 crc kubenswrapper[4922]: E0126 15:14:09.094543 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:14:24 crc kubenswrapper[4922]: I0126 15:14:24.092615 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:14:24 crc kubenswrapper[4922]: E0126 15:14:24.093420 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:14:35 crc kubenswrapper[4922]: I0126 15:14:35.093278 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:14:35 crc kubenswrapper[4922]: E0126 15:14:35.094165 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:14:46 crc kubenswrapper[4922]: I0126 15:14:46.092917 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:14:46 crc kubenswrapper[4922]: E0126 15:14:46.093796 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.185526 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"]
Jan 26 15:15:00 crc kubenswrapper[4922]: E0126 15:15:00.186512 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae819df4-63b1-457f-a406-fc5af6100f71" containerName="extract-utilities"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.186536 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae819df4-63b1-457f-a406-fc5af6100f71" containerName="extract-utilities"
Jan 26 15:15:00 crc kubenswrapper[4922]: E0126 15:15:00.186560 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae819df4-63b1-457f-a406-fc5af6100f71" containerName="registry-server"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.186566 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae819df4-63b1-457f-a406-fc5af6100f71" containerName="registry-server"
Jan 26 15:15:00 crc kubenswrapper[4922]: E0126 15:15:00.186585 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae819df4-63b1-457f-a406-fc5af6100f71" containerName="extract-content"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.186592 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae819df4-63b1-457f-a406-fc5af6100f71" containerName="extract-content"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.186876 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae819df4-63b1-457f-a406-fc5af6100f71" containerName="registry-server"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.187700 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.189805 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.190636 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.206819 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"]
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.220556 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcbf3a22-f737-472e-9364-4a03d629df67-config-volume\") pod \"collect-profiles-29490675-wktr2\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.220639 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcbf3a22-f737-472e-9364-4a03d629df67-secret-volume\") pod \"collect-profiles-29490675-wktr2\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.220697 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9f9\" (UniqueName: \"kubernetes.io/projected/dcbf3a22-f737-472e-9364-4a03d629df67-kube-api-access-cf9f9\") pod \"collect-profiles-29490675-wktr2\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.323743 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcbf3a22-f737-472e-9364-4a03d629df67-config-volume\") pod \"collect-profiles-29490675-wktr2\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.323852 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcbf3a22-f737-472e-9364-4a03d629df67-secret-volume\") pod \"collect-profiles-29490675-wktr2\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.323908 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9f9\" (UniqueName: \"kubernetes.io/projected/dcbf3a22-f737-472e-9364-4a03d629df67-kube-api-access-cf9f9\") pod \"collect-profiles-29490675-wktr2\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.324982 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcbf3a22-f737-472e-9364-4a03d629df67-config-volume\") pod \"collect-profiles-29490675-wktr2\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.340495 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcbf3a22-f737-472e-9364-4a03d629df67-secret-volume\") pod \"collect-profiles-29490675-wktr2\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.342939 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9f9\" (UniqueName: \"kubernetes.io/projected/dcbf3a22-f737-472e-9364-4a03d629df67-kube-api-access-cf9f9\") pod \"collect-profiles-29490675-wktr2\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:00 crc kubenswrapper[4922]: I0126 15:15:00.509900 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:01 crc kubenswrapper[4922]: I0126 15:15:01.002463 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"]
Jan 26 15:15:01 crc kubenswrapper[4922]: I0126 15:15:01.092051 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:15:01 crc kubenswrapper[4922]: E0126 15:15:01.092438 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:15:01 crc kubenswrapper[4922]: I0126 15:15:01.785502 4922 generic.go:334] "Generic (PLEG): container finished" podID="dcbf3a22-f737-472e-9364-4a03d629df67" containerID="c6c7ed5e7f3c8fbe07d238013bad7344902e5398376ce2f33218cfd27abef5aa" exitCode=0
Jan 26 15:15:01 crc kubenswrapper[4922]: I0126 15:15:01.785735 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2" event={"ID":"dcbf3a22-f737-472e-9364-4a03d629df67","Type":"ContainerDied","Data":"c6c7ed5e7f3c8fbe07d238013bad7344902e5398376ce2f33218cfd27abef5aa"}
Jan 26 15:15:01 crc kubenswrapper[4922]: I0126 15:15:01.785829 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2" event={"ID":"dcbf3a22-f737-472e-9364-4a03d629df67","Type":"ContainerStarted","Data":"66840ea2e67642981e680f5fe3c56197123bd5ba72fd1ed8ad7ccd1d29bad7fb"}
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.346811 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.490138 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcbf3a22-f737-472e-9364-4a03d629df67-config-volume\") pod \"dcbf3a22-f737-472e-9364-4a03d629df67\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") "
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.490597 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcbf3a22-f737-472e-9364-4a03d629df67-secret-volume\") pod \"dcbf3a22-f737-472e-9364-4a03d629df67\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") "
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.490768 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf9f9\" (UniqueName: \"kubernetes.io/projected/dcbf3a22-f737-472e-9364-4a03d629df67-kube-api-access-cf9f9\") pod \"dcbf3a22-f737-472e-9364-4a03d629df67\" (UID: \"dcbf3a22-f737-472e-9364-4a03d629df67\") "
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.490953 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbf3a22-f737-472e-9364-4a03d629df67-config-volume" (OuterVolumeSpecName: "config-volume") pod "dcbf3a22-f737-472e-9364-4a03d629df67" (UID: "dcbf3a22-f737-472e-9364-4a03d629df67"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.491521 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcbf3a22-f737-472e-9364-4a03d629df67-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.517919 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbf3a22-f737-472e-9364-4a03d629df67-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dcbf3a22-f737-472e-9364-4a03d629df67" (UID: "dcbf3a22-f737-472e-9364-4a03d629df67"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.518299 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbf3a22-f737-472e-9364-4a03d629df67-kube-api-access-cf9f9" (OuterVolumeSpecName: "kube-api-access-cf9f9") pod "dcbf3a22-f737-472e-9364-4a03d629df67" (UID: "dcbf3a22-f737-472e-9364-4a03d629df67"). InnerVolumeSpecName "kube-api-access-cf9f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.593999 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcbf3a22-f737-472e-9364-4a03d629df67-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.594042 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf9f9\" (UniqueName: \"kubernetes.io/projected/dcbf3a22-f737-472e-9364-4a03d629df67-kube-api-access-cf9f9\") on node \"crc\" DevicePath \"\""
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.814788 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2" event={"ID":"dcbf3a22-f737-472e-9364-4a03d629df67","Type":"ContainerDied","Data":"66840ea2e67642981e680f5fe3c56197123bd5ba72fd1ed8ad7ccd1d29bad7fb"}
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.814833 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66840ea2e67642981e680f5fe3c56197123bd5ba72fd1ed8ad7ccd1d29bad7fb"
Jan 26 15:15:03 crc kubenswrapper[4922]: I0126 15:15:03.814847 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"
Jan 26 15:15:04 crc kubenswrapper[4922]: I0126 15:15:04.431630 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t"]
Jan 26 15:15:04 crc kubenswrapper[4922]: I0126 15:15:04.441845 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490630-n988t"]
Jan 26 15:15:05 crc kubenswrapper[4922]: I0126 15:15:05.110119 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e391b4-ed5e-4fb3-828e-9b9f06d55b6b" path="/var/lib/kubelet/pods/22e391b4-ed5e-4fb3-828e-9b9f06d55b6b/volumes"
Jan 26 15:15:13 crc kubenswrapper[4922]: I0126 15:15:13.101306 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:15:13 crc kubenswrapper[4922]: I0126 15:15:13.946448 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"7e3b3a49784924810b3dbab65b7096624952e5d47d5f0814866e3fe2d5725527"}
Jan 26 15:15:19 crc kubenswrapper[4922]: I0126 15:15:19.607939 4922 scope.go:117] "RemoveContainer" containerID="46caf6b90014d6bf9203f6151a4c9713a97eb0cdfaa588b6839d966e581805db"
Jan 26 15:17:41 crc kubenswrapper[4922]: I0126 15:17:41.306545 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:17:41 crc kubenswrapper[4922]: I0126 15:17:41.307320 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:18:11 crc kubenswrapper[4922]: I0126 15:18:11.307430 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:18:11 crc kubenswrapper[4922]: I0126 15:18:11.308110 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.763392 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4zskg"]
Jan 26 15:18:34 crc kubenswrapper[4922]: E0126 15:18:34.764427 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbf3a22-f737-472e-9364-4a03d629df67" containerName="collect-profiles"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.764443 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbf3a22-f737-472e-9364-4a03d629df67" containerName="collect-profiles"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.764709 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbf3a22-f737-472e-9364-4a03d629df67" containerName="collect-profiles"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.766486 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.777852 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zskg"]
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.840008 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2g6\" (UniqueName: \"kubernetes.io/projected/204a737c-fffb-48eb-ba85-fccae7022f47-kube-api-access-zk2g6\") pod \"redhat-marketplace-4zskg\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") " pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.840104 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-utilities\") pod \"redhat-marketplace-4zskg\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") " pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.840166 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-catalog-content\") pod \"redhat-marketplace-4zskg\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") " pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.941820 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-utilities\") pod \"redhat-marketplace-4zskg\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") " pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.941885 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-catalog-content\") pod \"redhat-marketplace-4zskg\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") " pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.941996 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2g6\" (UniqueName: \"kubernetes.io/projected/204a737c-fffb-48eb-ba85-fccae7022f47-kube-api-access-zk2g6\") pod \"redhat-marketplace-4zskg\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") " pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.942282 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-utilities\") pod \"redhat-marketplace-4zskg\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") " pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.942370 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-catalog-content\") pod \"redhat-marketplace-4zskg\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") " pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:34 crc kubenswrapper[4922]: I0126 15:18:34.961683 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2g6\" (UniqueName: \"kubernetes.io/projected/204a737c-fffb-48eb-ba85-fccae7022f47-kube-api-access-zk2g6\") pod \"redhat-marketplace-4zskg\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") " pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:35 crc kubenswrapper[4922]: I0126 15:18:35.095522 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:35 crc kubenswrapper[4922]: I0126 15:18:35.580283 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zskg"]
Jan 26 15:18:36 crc kubenswrapper[4922]: I0126 15:18:36.135962 4922 generic.go:334] "Generic (PLEG): container finished" podID="204a737c-fffb-48eb-ba85-fccae7022f47" containerID="e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12" exitCode=0
Jan 26 15:18:36 crc kubenswrapper[4922]: I0126 15:18:36.136009 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zskg" event={"ID":"204a737c-fffb-48eb-ba85-fccae7022f47","Type":"ContainerDied","Data":"e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12"}
Jan 26 15:18:36 crc kubenswrapper[4922]: I0126 15:18:36.136039 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zskg" event={"ID":"204a737c-fffb-48eb-ba85-fccae7022f47","Type":"ContainerStarted","Data":"6a3b01a6024133fa52e32a867f44ed4be292b41ff887ee70896da9b8c6e9a919"}
Jan 26 15:18:36 crc kubenswrapper[4922]: I0126 15:18:36.137980 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 15:18:38 crc kubenswrapper[4922]: I0126 15:18:38.167143 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zskg" event={"ID":"204a737c-fffb-48eb-ba85-fccae7022f47","Type":"ContainerStarted","Data":"fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763"}
Jan 26 15:18:38 crc kubenswrapper[4922]: E0126 15:18:38.361279 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod204a737c_fffb_48eb_ba85_fccae7022f47.slice/crio-conmon-fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 15:18:39 crc kubenswrapper[4922]: I0126 15:18:39.180203 4922 generic.go:334] "Generic (PLEG): container finished" podID="204a737c-fffb-48eb-ba85-fccae7022f47" containerID="fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763" exitCode=0
Jan 26 15:18:39 crc kubenswrapper[4922]: I0126 15:18:39.180557 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zskg" event={"ID":"204a737c-fffb-48eb-ba85-fccae7022f47","Type":"ContainerDied","Data":"fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763"}
Jan 26 15:18:39 crc kubenswrapper[4922]: I0126 15:18:39.180589 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zskg" event={"ID":"204a737c-fffb-48eb-ba85-fccae7022f47","Type":"ContainerStarted","Data":"d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb"}
Jan 26 15:18:39 crc kubenswrapper[4922]: I0126 15:18:39.203435 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4zskg" podStartSLOduration=2.545299095 podStartE2EDuration="5.203415935s" podCreationTimestamp="2026-01-26 15:18:34 +0000 UTC" firstStartedPulling="2026-01-26 15:18:36.137725891 +0000 UTC m=+4133.339988663" lastFinishedPulling="2026-01-26 15:18:38.795842731 +0000 UTC m=+4135.998105503" observedRunningTime="2026-01-26 15:18:39.200218988 +0000 UTC m=+4136.402481770" watchObservedRunningTime="2026-01-26 15:18:39.203415935 +0000 UTC m=+4136.405678717"
Jan 26 15:18:41 crc kubenswrapper[4922]: I0126 15:18:41.306588 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:18:41 crc kubenswrapper[4922]: I0126 15:18:41.306882 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:18:41 crc kubenswrapper[4922]: I0126 15:18:41.306919 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j"
Jan 26 15:18:41 crc kubenswrapper[4922]: I0126 15:18:41.307617 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e3b3a49784924810b3dbab65b7096624952e5d47d5f0814866e3fe2d5725527"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 15:18:41 crc kubenswrapper[4922]: I0126 15:18:41.307672 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://7e3b3a49784924810b3dbab65b7096624952e5d47d5f0814866e3fe2d5725527" gracePeriod=600
Jan 26 15:18:42 crc kubenswrapper[4922]: I0126 15:18:42.213556 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="7e3b3a49784924810b3dbab65b7096624952e5d47d5f0814866e3fe2d5725527" exitCode=0
Jan 26 15:18:42 crc kubenswrapper[4922]: I0126 15:18:42.213651 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"7e3b3a49784924810b3dbab65b7096624952e5d47d5f0814866e3fe2d5725527"}
Jan 26 15:18:42 crc kubenswrapper[4922]: I0126 15:18:42.214273 4922 scope.go:117] "RemoveContainer" containerID="5708accd13171ef48c7ac4520209c5afc26d70ae4d3c3161b18bb1771b1351bd"
Jan 26 15:18:43 crc kubenswrapper[4922]: I0126 15:18:43.227166 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f"}
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.576250 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xmwkw"]
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.580516 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.589383 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmwkw"]
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.751603 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8282d89-3d21-4ad5-b707-f00019cc6e70-utilities\") pod \"certified-operators-xmwkw\" (UID: \"c8282d89-3d21-4ad5-b707-f00019cc6e70\") " pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.751668 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jwzk\" (UniqueName: \"kubernetes.io/projected/c8282d89-3d21-4ad5-b707-f00019cc6e70-kube-api-access-4jwzk\") pod \"certified-operators-xmwkw\" (UID: \"c8282d89-3d21-4ad5-b707-f00019cc6e70\") " pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.751758 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8282d89-3d21-4ad5-b707-f00019cc6e70-catalog-content\") pod \"certified-operators-xmwkw\" (UID: \"c8282d89-3d21-4ad5-b707-f00019cc6e70\") " pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.853453 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8282d89-3d21-4ad5-b707-f00019cc6e70-utilities\") pod \"certified-operators-xmwkw\" (UID: \"c8282d89-3d21-4ad5-b707-f00019cc6e70\") " pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.853810 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jwzk\" (UniqueName: \"kubernetes.io/projected/c8282d89-3d21-4ad5-b707-f00019cc6e70-kube-api-access-4jwzk\") pod \"certified-operators-xmwkw\" (UID: \"c8282d89-3d21-4ad5-b707-f00019cc6e70\") " pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.854141 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8282d89-3d21-4ad5-b707-f00019cc6e70-catalog-content\") pod \"certified-operators-xmwkw\" (UID: \"c8282d89-3d21-4ad5-b707-f00019cc6e70\") " pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.854255 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8282d89-3d21-4ad5-b707-f00019cc6e70-utilities\") pod \"certified-operators-xmwkw\" (UID: \"c8282d89-3d21-4ad5-b707-f00019cc6e70\") " pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.854712 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8282d89-3d21-4ad5-b707-f00019cc6e70-catalog-content\") pod \"certified-operators-xmwkw\" (UID: \"c8282d89-3d21-4ad5-b707-f00019cc6e70\") " pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.875601 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jwzk\" (UniqueName: \"kubernetes.io/projected/c8282d89-3d21-4ad5-b707-f00019cc6e70-kube-api-access-4jwzk\") pod \"certified-operators-xmwkw\" (UID: \"c8282d89-3d21-4ad5-b707-f00019cc6e70\") " pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:44 crc kubenswrapper[4922]: I0126 15:18:44.951006 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:45 crc kubenswrapper[4922]: I0126 15:18:45.109707 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:45 crc kubenswrapper[4922]: I0126 15:18:45.110150 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:45 crc kubenswrapper[4922]: I0126 15:18:45.226759 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:45 crc kubenswrapper[4922]: I0126 15:18:45.316917 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:45 crc kubenswrapper[4922]: I0126 15:18:45.494550 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmwkw"]
Jan 26 15:18:45 crc kubenswrapper[4922]: W0126 15:18:45.502251 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8282d89_3d21_4ad5_b707_f00019cc6e70.slice/crio-08b6368abc46285e760dc854a1f22b97428d6e5e664018b8476865a032da6cac WatchSource:0}: Error finding container 08b6368abc46285e760dc854a1f22b97428d6e5e664018b8476865a032da6cac: Status 404 returned error can't find the container with id 08b6368abc46285e760dc854a1f22b97428d6e5e664018b8476865a032da6cac
Jan 26 15:18:46 crc kubenswrapper[4922]: I0126 15:18:46.260459 4922 generic.go:334] "Generic (PLEG): container finished" podID="c8282d89-3d21-4ad5-b707-f00019cc6e70" containerID="e59cd92459014ad905eb6c7d6ab4d929560eea7cc6f1c59032af6d67b62c0d13" exitCode=0
Jan 26 15:18:46 crc kubenswrapper[4922]: I0126 15:18:46.260525 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmwkw" event={"ID":"c8282d89-3d21-4ad5-b707-f00019cc6e70","Type":"ContainerDied","Data":"e59cd92459014ad905eb6c7d6ab4d929560eea7cc6f1c59032af6d67b62c0d13"}
Jan 26 15:18:46 crc kubenswrapper[4922]: I0126 15:18:46.260962 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmwkw" event={"ID":"c8282d89-3d21-4ad5-b707-f00019cc6e70","Type":"ContainerStarted","Data":"08b6368abc46285e760dc854a1f22b97428d6e5e664018b8476865a032da6cac"}
Jan 26 15:18:47 crc kubenswrapper[4922]: I0126 15:18:47.551495 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zskg"]
Jan 26 15:18:47 crc kubenswrapper[4922]: I0126 15:18:47.552249 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4zskg" podUID="204a737c-fffb-48eb-ba85-fccae7022f47" containerName="registry-server" containerID="cri-o://d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb" gracePeriod=2
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.082595 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.234842 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk2g6\" (UniqueName: \"kubernetes.io/projected/204a737c-fffb-48eb-ba85-fccae7022f47-kube-api-access-zk2g6\") pod \"204a737c-fffb-48eb-ba85-fccae7022f47\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") "
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.234923 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-catalog-content\") pod \"204a737c-fffb-48eb-ba85-fccae7022f47\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") "
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.255768 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-utilities\") pod \"204a737c-fffb-48eb-ba85-fccae7022f47\" (UID: \"204a737c-fffb-48eb-ba85-fccae7022f47\") "
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.256415 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-utilities" (OuterVolumeSpecName: "utilities") pod "204a737c-fffb-48eb-ba85-fccae7022f47" (UID: "204a737c-fffb-48eb-ba85-fccae7022f47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.257252 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.259509 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204a737c-fffb-48eb-ba85-fccae7022f47-kube-api-access-zk2g6" (OuterVolumeSpecName: "kube-api-access-zk2g6") pod "204a737c-fffb-48eb-ba85-fccae7022f47" (UID: "204a737c-fffb-48eb-ba85-fccae7022f47"). InnerVolumeSpecName "kube-api-access-zk2g6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.270387 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "204a737c-fffb-48eb-ba85-fccae7022f47" (UID: "204a737c-fffb-48eb-ba85-fccae7022f47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.284455 4922 generic.go:334] "Generic (PLEG): container finished" podID="204a737c-fffb-48eb-ba85-fccae7022f47" containerID="d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb" exitCode=0
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.284504 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zskg" event={"ID":"204a737c-fffb-48eb-ba85-fccae7022f47","Type":"ContainerDied","Data":"d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb"}
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.284560 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zskg" event={"ID":"204a737c-fffb-48eb-ba85-fccae7022f47","Type":"ContainerDied","Data":"6a3b01a6024133fa52e32a867f44ed4be292b41ff887ee70896da9b8c6e9a919"}
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.284557 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zskg"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.284641 4922 scope.go:117] "RemoveContainer" containerID="d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.304762 4922 scope.go:117] "RemoveContainer" containerID="fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.327754 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zskg"]
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.336480 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zskg"]
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.358570 4922 scope.go:117] "RemoveContainer" containerID="e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.361208 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk2g6\" (UniqueName: \"kubernetes.io/projected/204a737c-fffb-48eb-ba85-fccae7022f47-kube-api-access-zk2g6\") on node \"crc\" DevicePath \"\""
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.361311 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204a737c-fffb-48eb-ba85-fccae7022f47-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.394352 4922 scope.go:117] "RemoveContainer" containerID="d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb"
Jan 26 15:18:48 crc kubenswrapper[4922]: E0126 15:18:48.394951 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb\": container with ID starting with d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb not found: ID does not exist" containerID="d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.394982 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb"} err="failed to get container status \"d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb\": rpc error: code = NotFound desc = could not find container \"d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb\": container with ID starting with d8f9dece796d2f193b200fa4547df1aa3662ee55ce9cc206e64c32821a2baadb not found: ID does not exist"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.395003 4922 scope.go:117] "RemoveContainer" containerID="fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763"
Jan 26 15:18:48 crc kubenswrapper[4922]: E0126 15:18:48.395507 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763\": container with ID starting with fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763 not found: ID does not exist" containerID="fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.395552 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763"} err="failed to get container status \"fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763\": rpc error: code = NotFound desc = could not find container \"fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763\": container with ID starting with fdd1a5132ba7fa463c6945b147fedf43222e622e99b438692833dfd2cea01763 not found: ID does not exist"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.395580 4922 scope.go:117] "RemoveContainer" containerID="e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12"
Jan 26 15:18:48 crc kubenswrapper[4922]: E0126 15:18:48.396016 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12\": container with ID starting with e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12 not found: ID does not exist" containerID="e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12"
Jan 26 15:18:48 crc kubenswrapper[4922]: I0126 15:18:48.396047 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12"} err="failed to get container status \"e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12\": rpc error: code = NotFound desc = could not find container \"e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12\": container with ID starting with e9f38eb5eeed6176da80ce32622632b123a0447ffd2ea05774c33cbfcc276c12 not found: ID does not exist"
Jan 26 15:18:49 crc kubenswrapper[4922]: I0126 15:18:49.105958 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204a737c-fffb-48eb-ba85-fccae7022f47" path="/var/lib/kubelet/pods/204a737c-fffb-48eb-ba85-fccae7022f47/volumes"
Jan 26 15:18:51 crc kubenswrapper[4922]: I0126 15:18:51.324393 4922 generic.go:334] "Generic (PLEG): container finished" podID="c8282d89-3d21-4ad5-b707-f00019cc6e70" containerID="5019d5b359b7bd431db15faa4ac0bdb7ccdbc91e45e48c0ebd4b23170b16c09a" exitCode=0
Jan 26 15:18:51 crc kubenswrapper[4922]: I0126 15:18:51.324500 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmwkw" event={"ID":"c8282d89-3d21-4ad5-b707-f00019cc6e70","Type":"ContainerDied","Data":"5019d5b359b7bd431db15faa4ac0bdb7ccdbc91e45e48c0ebd4b23170b16c09a"}
Jan 26 15:18:52 crc kubenswrapper[4922]: I0126 15:18:52.360213 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmwkw" event={"ID":"c8282d89-3d21-4ad5-b707-f00019cc6e70","Type":"ContainerStarted","Data":"929cd938b03dd828eeac62e87937ab01ad51ced48f1131f5b125c3b40ae46fe8"}
Jan 26 15:18:52 crc kubenswrapper[4922]: I0126 15:18:52.407999 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xmwkw" podStartSLOduration=2.806531542 podStartE2EDuration="8.407978694s" podCreationTimestamp="2026-01-26 15:18:44 +0000 UTC" firstStartedPulling="2026-01-26 15:18:46.262868229 +0000 UTC m=+4143.465131011" lastFinishedPulling="2026-01-26 15:18:51.864315391 +0000 UTC m=+4149.066578163" observedRunningTime="2026-01-26 15:18:52.399196954 +0000 UTC m=+4149.601459736" watchObservedRunningTime="2026-01-26 15:18:52.407978694 +0000 UTC m=+4149.610241466"
Jan 26 15:18:54 crc kubenswrapper[4922]: I0126 15:18:54.951420 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:54 crc kubenswrapper[4922]: I0126 15:18:54.952030 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:18:55 crc kubenswrapper[4922]: I0126 15:18:55.015723 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:19:05 crc kubenswrapper[4922]: I0126 15:19:05.541749 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xmwkw"
Jan 26 15:19:05 crc kubenswrapper[4922]: I0126 15:19:05.666112 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmwkw"]
Jan 26 15:19:05 crc kubenswrapper[4922]: I0126 15:19:05.718569 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfx9p"]
Jan 26 15:19:05 crc kubenswrapper[4922]: I0126 15:19:05.718955 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sfx9p" podUID="8da77431-c903-448f-892c-89371c3092d4" containerName="registry-server" containerID="cri-o://b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025" gracePeriod=2
Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.247756 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfx9p"
Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.366882 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtrn2\" (UniqueName: \"kubernetes.io/projected/8da77431-c903-448f-892c-89371c3092d4-kube-api-access-qtrn2\") pod \"8da77431-c903-448f-892c-89371c3092d4\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") "
Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.367048 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-utilities\") pod \"8da77431-c903-448f-892c-89371c3092d4\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") "
Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.367253 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-catalog-content\") pod \"8da77431-c903-448f-892c-89371c3092d4\" (UID: \"8da77431-c903-448f-892c-89371c3092d4\") "
Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.369172 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-utilities" (OuterVolumeSpecName: "utilities") pod "8da77431-c903-448f-892c-89371c3092d4" (UID: "8da77431-c903-448f-892c-89371c3092d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.375015 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da77431-c903-448f-892c-89371c3092d4-kube-api-access-qtrn2" (OuterVolumeSpecName: "kube-api-access-qtrn2") pod "8da77431-c903-448f-892c-89371c3092d4" (UID: "8da77431-c903-448f-892c-89371c3092d4"). InnerVolumeSpecName "kube-api-access-qtrn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.430271 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8da77431-c903-448f-892c-89371c3092d4" (UID: "8da77431-c903-448f-892c-89371c3092d4"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.470312 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.470341 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtrn2\" (UniqueName: \"kubernetes.io/projected/8da77431-c903-448f-892c-89371c3092d4-kube-api-access-qtrn2\") on node \"crc\" DevicePath \"\"" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.470352 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da77431-c903-448f-892c-89371c3092d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.520009 4922 generic.go:334] "Generic (PLEG): container finished" podID="8da77431-c903-448f-892c-89371c3092d4" containerID="b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025" exitCode=0 Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.520108 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfx9p" event={"ID":"8da77431-c903-448f-892c-89371c3092d4","Type":"ContainerDied","Data":"b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025"} Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.520127 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfx9p" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.520183 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfx9p" event={"ID":"8da77431-c903-448f-892c-89371c3092d4","Type":"ContainerDied","Data":"7beaef39d64165b006259462937a5d69ac4e07a335a98a1e2faf31528c1a1d3a"} Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.520212 4922 scope.go:117] "RemoveContainer" containerID="b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.542111 4922 scope.go:117] "RemoveContainer" containerID="91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.567253 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfx9p"] Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.568524 4922 scope.go:117] "RemoveContainer" containerID="0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.575518 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sfx9p"] Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.627006 4922 scope.go:117] "RemoveContainer" containerID="b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025" Jan 26 15:19:06 crc kubenswrapper[4922]: E0126 15:19:06.627454 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025\": container with ID starting with b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025 not found: ID does not exist" containerID="b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.627504 
4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025"} err="failed to get container status \"b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025\": rpc error: code = NotFound desc = could not find container \"b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025\": container with ID starting with b83f63917b35cae08b653b70f8e65ec9ca332f0524a51b2baaad11396e063025 not found: ID does not exist" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.627533 4922 scope.go:117] "RemoveContainer" containerID="91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70" Jan 26 15:19:06 crc kubenswrapper[4922]: E0126 15:19:06.627854 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70\": container with ID starting with 91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70 not found: ID does not exist" containerID="91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.627885 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70"} err="failed to get container status \"91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70\": rpc error: code = NotFound desc = could not find container \"91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70\": container with ID starting with 91a2f258cc97152a370c01658406a19b9525ee3cfbeee7dcf5eb02f161fb0c70 not found: ID does not exist" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.627908 4922 scope.go:117] "RemoveContainer" containerID="0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2" Jan 26 15:19:06 crc kubenswrapper[4922]: E0126 15:19:06.628161 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2\": container with ID starting with 0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2 not found: ID does not exist" containerID="0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2" Jan 26 15:19:06 crc kubenswrapper[4922]: I0126 15:19:06.628191 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2"} err="failed to get container status \"0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2\": rpc error: code = NotFound desc = could not find container \"0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2\": container with ID starting with 0ae3502275fc4087e0a19da5a1870122d36dff03e6e40ad15c68dcea19bcd0d2 not found: ID does not exist" Jan 26 15:19:07 crc kubenswrapper[4922]: I0126 15:19:07.103647 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da77431-c903-448f-892c-89371c3092d4" path="/var/lib/kubelet/pods/8da77431-c903-448f-892c-89371c3092d4/volumes" Jan 26 15:21:11 crc kubenswrapper[4922]: I0126 15:21:11.316393 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:21:11 crc kubenswrapper[4922]: I0126 15:21:11.318335 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.645552 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h5mdd"] Jan 26 15:21:32 crc kubenswrapper[4922]: E0126 15:21:32.646534 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204a737c-fffb-48eb-ba85-fccae7022f47" containerName="extract-utilities" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.646550 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="204a737c-fffb-48eb-ba85-fccae7022f47" containerName="extract-utilities" Jan 26 15:21:32 crc kubenswrapper[4922]: E0126 15:21:32.646570 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da77431-c903-448f-892c-89371c3092d4" containerName="registry-server" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.646577 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da77431-c903-448f-892c-89371c3092d4" containerName="registry-server" Jan 26 15:21:32 crc kubenswrapper[4922]: E0126 15:21:32.646595 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da77431-c903-448f-892c-89371c3092d4" containerName="extract-content" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.646602 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da77431-c903-448f-892c-89371c3092d4" containerName="extract-content" Jan 26 15:21:32 crc kubenswrapper[4922]: E0126 15:21:32.646625 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da77431-c903-448f-892c-89371c3092d4" containerName="extract-utilities" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.646632 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da77431-c903-448f-892c-89371c3092d4" containerName="extract-utilities" Jan 26 15:21:32 crc kubenswrapper[4922]: E0126 15:21:32.646648 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204a737c-fffb-48eb-ba85-fccae7022f47" containerName="registry-server" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.646653 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="204a737c-fffb-48eb-ba85-fccae7022f47" containerName="registry-server" Jan 26 15:21:32 crc kubenswrapper[4922]: E0126 15:21:32.646666 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204a737c-fffb-48eb-ba85-fccae7022f47" containerName="extract-content" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.646671 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="204a737c-fffb-48eb-ba85-fccae7022f47" containerName="extract-content" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.646905 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da77431-c903-448f-892c-89371c3092d4" containerName="registry-server" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.646922 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="204a737c-fffb-48eb-ba85-fccae7022f47" containerName="registry-server" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.648564 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.662364 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5mdd"] Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.742800 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-catalog-content\") pod \"community-operators-h5mdd\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.742962 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-utilities\") pod \"community-operators-h5mdd\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.743039 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9xgh\" (UniqueName: \"kubernetes.io/projected/6ad24d77-ae14-4421-a621-0afb78fd4a7c-kube-api-access-h9xgh\") pod \"community-operators-h5mdd\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.845549 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-catalog-content\") pod \"community-operators-h5mdd\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.845688 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-utilities\") pod \"community-operators-h5mdd\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.845748 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9xgh\" (UniqueName: \"kubernetes.io/projected/6ad24d77-ae14-4421-a621-0afb78fd4a7c-kube-api-access-h9xgh\") pod \"community-operators-h5mdd\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.846095 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-catalog-content\") pod \"community-operators-h5mdd\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.846194 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-utilities\") pod \"community-operators-h5mdd\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.865100 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h9xgh\" (UniqueName: \"kubernetes.io/projected/6ad24d77-ae14-4421-a621-0afb78fd4a7c-kube-api-access-h9xgh\") pod \"community-operators-h5mdd\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:32 crc kubenswrapper[4922]: I0126 15:21:32.971174 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:33 crc kubenswrapper[4922]: I0126 15:21:33.518445 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5mdd"] Jan 26 15:21:34 crc kubenswrapper[4922]: I0126 15:21:34.425785 4922 generic.go:334] "Generic (PLEG): container finished" podID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerID="17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915" exitCode=0 Jan 26 15:21:34 crc kubenswrapper[4922]: I0126 15:21:34.425837 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5mdd" event={"ID":"6ad24d77-ae14-4421-a621-0afb78fd4a7c","Type":"ContainerDied","Data":"17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915"} Jan 26 15:21:34 crc kubenswrapper[4922]: I0126 15:21:34.429266 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5mdd" event={"ID":"6ad24d77-ae14-4421-a621-0afb78fd4a7c","Type":"ContainerStarted","Data":"66e843094329b002b62477ea668a6754b4fce7060ceaed9fb69599c698cb63df"} Jan 26 15:21:37 crc kubenswrapper[4922]: I0126 15:21:37.459823 4922 generic.go:334] "Generic (PLEG): container finished" podID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerID="28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521" exitCode=0 Jan 26 15:21:37 crc kubenswrapper[4922]: I0126 15:21:37.460033 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5mdd" event={"ID":"6ad24d77-ae14-4421-a621-0afb78fd4a7c","Type":"ContainerDied","Data":"28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521"} Jan 26 15:21:38 crc kubenswrapper[4922]: I0126 15:21:38.479746 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5mdd" event={"ID":"6ad24d77-ae14-4421-a621-0afb78fd4a7c","Type":"ContainerStarted","Data":"24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33"} Jan 26 15:21:38 crc kubenswrapper[4922]: I0126 15:21:38.514925 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h5mdd" podStartSLOduration=3.037789171 podStartE2EDuration="6.514899576s" podCreationTimestamp="2026-01-26 15:21:32 +0000 UTC" firstStartedPulling="2026-01-26 15:21:34.427843708 +0000 UTC m=+4311.630106500" lastFinishedPulling="2026-01-26 15:21:37.904954143 +0000 UTC m=+4315.107216905" observedRunningTime="2026-01-26 15:21:38.499475376 +0000 UTC m=+4315.701738158" watchObservedRunningTime="2026-01-26 15:21:38.514899576 +0000 UTC m=+4315.717162388" Jan 26 15:21:41 crc kubenswrapper[4922]: I0126 15:21:41.306690 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:21:41 crc kubenswrapper[4922]: I0126 15:21:41.307024 4922 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:21:42 crc kubenswrapper[4922]: I0126 15:21:42.971434 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:42 crc kubenswrapper[4922]: I0126 15:21:42.971918 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:43 crc kubenswrapper[4922]: I0126 15:21:43.033239 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:43 crc kubenswrapper[4922]: I0126 15:21:43.578193 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:43 crc kubenswrapper[4922]: I0126 15:21:43.644147 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5mdd"] Jan 26 15:21:45 crc kubenswrapper[4922]: I0126 15:21:45.541755 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h5mdd" podUID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerName="registry-server" containerID="cri-o://24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33" gracePeriod=2 Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.038821 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.129701 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-catalog-content\") pod \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.129861 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9xgh\" (UniqueName: \"kubernetes.io/projected/6ad24d77-ae14-4421-a621-0afb78fd4a7c-kube-api-access-h9xgh\") pod \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.129895 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-utilities\") pod \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\" (UID: \"6ad24d77-ae14-4421-a621-0afb78fd4a7c\") " Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.130698 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-utilities" (OuterVolumeSpecName: "utilities") pod "6ad24d77-ae14-4421-a621-0afb78fd4a7c" (UID: "6ad24d77-ae14-4421-a621-0afb78fd4a7c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.136506 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad24d77-ae14-4421-a621-0afb78fd4a7c-kube-api-access-h9xgh" (OuterVolumeSpecName: "kube-api-access-h9xgh") pod "6ad24d77-ae14-4421-a621-0afb78fd4a7c" (UID: "6ad24d77-ae14-4421-a621-0afb78fd4a7c"). InnerVolumeSpecName "kube-api-access-h9xgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.189732 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ad24d77-ae14-4421-a621-0afb78fd4a7c" (UID: "6ad24d77-ae14-4421-a621-0afb78fd4a7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.233058 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9xgh\" (UniqueName: \"kubernetes.io/projected/6ad24d77-ae14-4421-a621-0afb78fd4a7c-kube-api-access-h9xgh\") on node \"crc\" DevicePath \"\"" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.233099 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.233110 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ad24d77-ae14-4421-a621-0afb78fd4a7c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.570086 4922 generic.go:334] "Generic (PLEG): container finished" podID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerID="24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33" exitCode=0 Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.570395 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5mdd" event={"ID":"6ad24d77-ae14-4421-a621-0afb78fd4a7c","Type":"ContainerDied","Data":"24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33"} Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.570436 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5mdd" event={"ID":"6ad24d77-ae14-4421-a621-0afb78fd4a7c","Type":"ContainerDied","Data":"66e843094329b002b62477ea668a6754b4fce7060ceaed9fb69599c698cb63df"} Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.570483 4922 scope.go:117] "RemoveContainer" containerID="24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.570720 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5mdd" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.593112 4922 scope.go:117] "RemoveContainer" containerID="28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.618885 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5mdd"] Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.631315 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h5mdd"] Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.647683 4922 scope.go:117] "RemoveContainer" containerID="17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.674485 4922 scope.go:117] "RemoveContainer" containerID="24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33" Jan 26 15:21:46 crc kubenswrapper[4922]: E0126 15:21:46.674911 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33\": container with ID starting with 24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33 not found: ID does not exist" containerID="24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.674967 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33"} err="failed to get container status \"24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33\": rpc error: code = NotFound desc = could not find container \"24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33\": container with ID starting with 24808618a242d1724e2ee2735bb9791d0cb880c440f0eb5f14b97a9281784c33 not found: ID does not exist" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.675008 4922 scope.go:117] "RemoveContainer" containerID="28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521" Jan 26 15:21:46 crc kubenswrapper[4922]: E0126 15:21:46.683501 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521\": container with ID starting with 28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521 not found: ID does not exist" containerID="28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.683538 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521"} err="failed to get container status \"28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521\": rpc error: code = NotFound desc = could not find container \"28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521\": container with ID starting with 28f4cb71a68bbf12ddc4509780d0653fbff03d45568c8630369a8176bf488521 not found: ID does not exist" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.683563 4922 scope.go:117] "RemoveContainer" containerID="17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915" Jan 26 15:21:46 crc kubenswrapper[4922]: E0126 15:21:46.683901 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915\": container with ID starting with 17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915 not found: ID does not exist" containerID="17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915" Jan 26 15:21:46 crc kubenswrapper[4922]: I0126 15:21:46.683956 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915"} err="failed to get container status \"17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915\": rpc error: code = NotFound desc = could not find container \"17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915\": container with ID starting with 17466a9cfbfc2154aa4cf93faea933bcd0b58ac39605e599de5102c2738e9915 not found: ID does not exist" Jan 26 15:21:47 crc kubenswrapper[4922]: I0126 15:21:47.107765 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" path="/var/lib/kubelet/pods/6ad24d77-ae14-4421-a621-0afb78fd4a7c/volumes" Jan 26 15:22:11 crc kubenswrapper[4922]: I0126 15:22:11.306710 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:22:11 crc kubenswrapper[4922]: I0126 15:22:11.307249 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:22:11 crc kubenswrapper[4922]: I0126 15:22:11.307302 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 15:22:11 crc kubenswrapper[4922]: I0126 15:22:11.308155 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:22:11 crc kubenswrapper[4922]: I0126 15:22:11.308221 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" gracePeriod=600 Jan 26 15:22:11 crc kubenswrapper[4922]: E0126 15:22:11.434505 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:22:11 crc kubenswrapper[4922]: I0126 15:22:11.813028 4922 generic.go:334] 
"Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" exitCode=0 Jan 26 15:22:11 crc kubenswrapper[4922]: I0126 15:22:11.813107 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f"} Jan 26 15:22:11 crc kubenswrapper[4922]: I0126 15:22:11.813390 4922 scope.go:117] "RemoveContainer" containerID="7e3b3a49784924810b3dbab65b7096624952e5d47d5f0814866e3fe2d5725527" Jan 26 15:22:11 crc kubenswrapper[4922]: I0126 15:22:11.815439 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:22:11 crc kubenswrapper[4922]: E0126 15:22:11.816253 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:22:27 crc kubenswrapper[4922]: I0126 15:22:27.092577 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:22:27 crc kubenswrapper[4922]: E0126 15:22:27.093699 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:22:42 crc kubenswrapper[4922]: I0126 15:22:42.093125 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:22:42 crc kubenswrapper[4922]: E0126 15:22:42.094414 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:22:53 crc kubenswrapper[4922]: I0126 15:22:53.100917 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:22:53 crc kubenswrapper[4922]: E0126 15:22:53.101891 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:23:04 crc kubenswrapper[4922]: I0126 15:23:04.093995 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" 
Jan 26 15:23:04 crc kubenswrapper[4922]: E0126 15:23:04.095123 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:23:15 crc kubenswrapper[4922]: I0126 15:23:15.092230 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:23:15 crc kubenswrapper[4922]: E0126 15:23:15.092994 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:23:29 crc kubenswrapper[4922]: I0126 15:23:29.094198 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:23:29 crc kubenswrapper[4922]: E0126 15:23:29.095653 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:23:42 crc kubenswrapper[4922]: I0126 15:23:42.093677 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:23:42 crc kubenswrapper[4922]: E0126 15:23:42.094813 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:23:53 crc kubenswrapper[4922]: I0126 15:23:53.106288 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:23:53 crc kubenswrapper[4922]: E0126 15:23:53.107178 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:24:08 crc kubenswrapper[4922]: I0126 15:24:08.092875 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:24:08 crc kubenswrapper[4922]: E0126 15:24:08.093831 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:24:22 crc kubenswrapper[4922]: I0126 15:24:22.092695 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:24:22 crc kubenswrapper[4922]: E0126 15:24:22.093950 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:24:33 crc kubenswrapper[4922]: I0126 15:24:33.100025 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:24:33 crc kubenswrapper[4922]: E0126 15:24:33.100749 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:24:47 crc kubenswrapper[4922]: I0126 15:24:47.092900 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:24:47 crc kubenswrapper[4922]: E0126 15:24:47.093983 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:25:00 crc kubenswrapper[4922]: I0126 15:25:00.092901 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:25:00 crc kubenswrapper[4922]: E0126 15:25:00.093663 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:25:11 crc kubenswrapper[4922]: I0126 15:25:11.092897 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:25:11 crc kubenswrapper[4922]: E0126 15:25:11.093780 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:25:26 crc kubenswrapper[4922]: I0126 15:25:26.092883 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:25:26 crc kubenswrapper[4922]: E0126 15:25:26.093667 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:25:40 crc kubenswrapper[4922]: I0126 15:25:40.092562 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:25:40 crc kubenswrapper[4922]: E0126 15:25:40.093353 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:25:53 crc kubenswrapper[4922]: I0126 15:25:53.098721 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:25:53 crc kubenswrapper[4922]: E0126 15:25:53.099544 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:26:07 crc kubenswrapper[4922]: I0126 15:26:07.092981 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f" Jan 26 15:26:07 crc kubenswrapper[4922]: E0126 15:26:07.093986 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:26:07 crc kubenswrapper[4922]: I0126 15:26:07.932585 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-79nnr"] Jan 26 15:26:07 crc kubenswrapper[4922]: E0126 15:26:07.933311 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerName="registry-server" Jan 26 15:26:07 crc kubenswrapper[4922]: I0126 15:26:07.933324 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerName="registry-server" Jan 26 15:26:07 crc kubenswrapper[4922]: E0126 15:26:07.933335 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" 
containerName="extract-content" Jan 26 15:26:07 crc kubenswrapper[4922]: I0126 15:26:07.933341 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerName="extract-content" Jan 26 15:26:07 crc kubenswrapper[4922]: E0126 15:26:07.933358 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerName="extract-utilities" Jan 26 15:26:07 crc kubenswrapper[4922]: I0126 15:26:07.933364 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerName="extract-utilities" Jan 26 15:26:07 crc kubenswrapper[4922]: I0126 15:26:07.933560 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad24d77-ae14-4421-a621-0afb78fd4a7c" containerName="registry-server" Jan 26 15:26:07 crc kubenswrapper[4922]: I0126 15:26:07.935919 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:07 crc kubenswrapper[4922]: I0126 15:26:07.949787 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79nnr"] Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.044770 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7drr\" (UniqueName: \"kubernetes.io/projected/530d4639-cc1a-4f9e-adcb-9075e846cd75-kube-api-access-n7drr\") pod \"redhat-operators-79nnr\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.044991 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-catalog-content\") pod \"redhat-operators-79nnr\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.045250 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-utilities\") pod \"redhat-operators-79nnr\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.147167 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7drr\" (UniqueName: \"kubernetes.io/projected/530d4639-cc1a-4f9e-adcb-9075e846cd75-kube-api-access-n7drr\") pod \"redhat-operators-79nnr\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.148824 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-catalog-content\") pod \"redhat-operators-79nnr\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.149135 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-utilities\") pod \"redhat-operators-79nnr\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " 
pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.149477 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-catalog-content\") pod \"redhat-operators-79nnr\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.149848 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-utilities\") pod \"redhat-operators-79nnr\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.167816 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7drr\" (UniqueName: \"kubernetes.io/projected/530d4639-cc1a-4f9e-adcb-9075e846cd75-kube-api-access-n7drr\") pod \"redhat-operators-79nnr\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.285025 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:08 crc kubenswrapper[4922]: I0126 15:26:08.809408 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79nnr"] Jan 26 15:26:09 crc kubenswrapper[4922]: I0126 15:26:09.231449 4922 generic.go:334] "Generic (PLEG): container finished" podID="530d4639-cc1a-4f9e-adcb-9075e846cd75" containerID="11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c" exitCode=0 Jan 26 15:26:09 crc kubenswrapper[4922]: I0126 15:26:09.231496 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79nnr" event={"ID":"530d4639-cc1a-4f9e-adcb-9075e846cd75","Type":"ContainerDied","Data":"11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c"} Jan 26 15:26:09 crc kubenswrapper[4922]: I0126 15:26:09.231526 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79nnr" event={"ID":"530d4639-cc1a-4f9e-adcb-9075e846cd75","Type":"ContainerStarted","Data":"45c036f9e8e358f3c29a2f94e257c5ab58d7b883669f4c678f45cc265c509693"} Jan 26 15:26:09 crc kubenswrapper[4922]: I0126 15:26:09.233644 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:26:10 crc kubenswrapper[4922]: I0126 15:26:10.246617 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79nnr" event={"ID":"530d4639-cc1a-4f9e-adcb-9075e846cd75","Type":"ContainerStarted","Data":"71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0"} Jan 26 15:26:14 crc kubenswrapper[4922]: I0126 15:26:14.281727 4922 generic.go:334] "Generic (PLEG): container finished" podID="530d4639-cc1a-4f9e-adcb-9075e846cd75" containerID="71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0" exitCode=0 Jan 26 15:26:14 crc kubenswrapper[4922]: I0126 15:26:14.281803 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79nnr" event={"ID":"530d4639-cc1a-4f9e-adcb-9075e846cd75","Type":"ContainerDied","Data":"71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0"} Jan 26 15:26:15 crc kubenswrapper[4922]: 
Jan 26 15:26:15 crc kubenswrapper[4922]: I0126 15:26:15.294216 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79nnr" event={"ID":"530d4639-cc1a-4f9e-adcb-9075e846cd75","Type":"ContainerStarted","Data":"4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c"}
Jan 26 15:26:15 crc kubenswrapper[4922]: I0126 15:26:15.317542 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-79nnr" podStartSLOduration=2.841362338 podStartE2EDuration="8.317524819s" podCreationTimestamp="2026-01-26 15:26:07 +0000 UTC" firstStartedPulling="2026-01-26 15:26:09.23341703 +0000 UTC m=+4586.435679802" lastFinishedPulling="2026-01-26 15:26:14.709579511 +0000 UTC m=+4591.911842283" observedRunningTime="2026-01-26 15:26:15.311896406 +0000 UTC m=+4592.514159198" watchObservedRunningTime="2026-01-26 15:26:15.317524819 +0000 UTC m=+4592.519787581"
Jan 26 15:26:18 crc kubenswrapper[4922]: I0126 15:26:18.092750 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f"
Jan 26 15:26:18 crc kubenswrapper[4922]: E0126 15:26:18.093642 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:26:18 crc kubenswrapper[4922]: I0126 15:26:18.285430 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-79nnr"
Jan 26 15:26:18 crc kubenswrapper[4922]: I0126 15:26:18.287194 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-79nnr"
Jan 26 15:26:19 crc kubenswrapper[4922]: I0126 15:26:19.509219 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-79nnr" podUID="530d4639-cc1a-4f9e-adcb-9075e846cd75" containerName="registry-server" probeResult="failure" output=<
Jan 26 15:26:19 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s
Jan 26 15:26:19 crc kubenswrapper[4922]: >
Jan 26 15:26:28 crc kubenswrapper[4922]: I0126 15:26:28.336011 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-79nnr"
Jan 26 15:26:29 crc kubenswrapper[4922]: I0126 15:26:29.003180 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-79nnr"
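Note: the startup and readiness probes for these catalog pods poll the gRPC endpoint that registry-server exposes on port 50051, and the failure captured above ("timeout: failed to connect service \":50051\" within 1s") just means the catalog had not finished its initial load; nine seconds later the same probes report started and ready. A minimal client-side check along the lines of what the probe does might look like this Go sketch (the address and 1-second budget are taken from the log output; everything else is illustrative, not the actual probe binary):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // Match the probe's 1s budget from the log output above.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithBlock()) // fail fast while the server is not accepting connections yet
        if err != nil {
            fmt.Println("probe failure:", err) // e.g. "context deadline exceeded"
            return
        }
        defer conn.Close()

        // An empty service name asks about the server as a whole.
        resp, err := grpc_health_v1.NewHealthClient(conn).Check(ctx,
            &grpc_health_v1.HealthCheckRequest{})
        if err != nil {
            fmt.Println("probe failure:", err)
            return
        }
        fmt.Println("probe result:", resp.GetStatus()) // SERVING once the catalog is loaded
    }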
Need to start a new one" pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.029138 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-utilities\") pod \"530d4639-cc1a-4f9e-adcb-9075e846cd75\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.029246 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-catalog-content\") pod \"530d4639-cc1a-4f9e-adcb-9075e846cd75\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.029351 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7drr\" (UniqueName: \"kubernetes.io/projected/530d4639-cc1a-4f9e-adcb-9075e846cd75-kube-api-access-n7drr\") pod \"530d4639-cc1a-4f9e-adcb-9075e846cd75\" (UID: \"530d4639-cc1a-4f9e-adcb-9075e846cd75\") " Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.030967 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-utilities" (OuterVolumeSpecName: "utilities") pod "530d4639-cc1a-4f9e-adcb-9075e846cd75" (UID: "530d4639-cc1a-4f9e-adcb-9075e846cd75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.037886 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530d4639-cc1a-4f9e-adcb-9075e846cd75-kube-api-access-n7drr" (OuterVolumeSpecName: "kube-api-access-n7drr") pod "530d4639-cc1a-4f9e-adcb-9075e846cd75" (UID: "530d4639-cc1a-4f9e-adcb-9075e846cd75"). InnerVolumeSpecName "kube-api-access-n7drr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.133730 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.133778 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7drr\" (UniqueName: \"kubernetes.io/projected/530d4639-cc1a-4f9e-adcb-9075e846cd75-kube-api-access-n7drr\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.155462 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "530d4639-cc1a-4f9e-adcb-9075e846cd75" (UID: "530d4639-cc1a-4f9e-adcb-9075e846cd75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.235893 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530d4639-cc1a-4f9e-adcb-9075e846cd75-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.451286 4922 generic.go:334] "Generic (PLEG): container finished" podID="530d4639-cc1a-4f9e-adcb-9075e846cd75" containerID="4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c" exitCode=0 Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.451333 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79nnr" event={"ID":"530d4639-cc1a-4f9e-adcb-9075e846cd75","Type":"ContainerDied","Data":"4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c"} Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.451361 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79nnr" event={"ID":"530d4639-cc1a-4f9e-adcb-9075e846cd75","Type":"ContainerDied","Data":"45c036f9e8e358f3c29a2f94e257c5ab58d7b883669f4c678f45cc265c509693"} Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.451367 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79nnr" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.451380 4922 scope.go:117] "RemoveContainer" containerID="4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.473573 4922 scope.go:117] "RemoveContainer" containerID="71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.490944 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-79nnr"] Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.504216 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-79nnr"] Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.514932 4922 scope.go:117] "RemoveContainer" containerID="11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.561397 4922 scope.go:117] "RemoveContainer" containerID="4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c" Jan 26 15:26:30 crc kubenswrapper[4922]: E0126 15:26:30.562382 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c\": container with ID starting with 4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c not found: ID does not exist" containerID="4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c" Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.562429 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c"} err="failed to get container status \"4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c\": rpc error: code = NotFound desc = could not find container \"4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c\": container with ID starting with 4d230fca3444956a1eac5db5b6baf21b2633a6d831a386ba0fab59656153578c not found: ID does not exist" Jan 26 15:26:30 crc 
Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.562460 4922 scope.go:117] "RemoveContainer" containerID="71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0"
Jan 26 15:26:30 crc kubenswrapper[4922]: E0126 15:26:30.563254 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0\": container with ID starting with 71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0 not found: ID does not exist" containerID="71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0"
Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.563327 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0"} err="failed to get container status \"71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0\": rpc error: code = NotFound desc = could not find container \"71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0\": container with ID starting with 71e3ab3a91e8b6de6749f87ac36e2e7355c1b5239e9451326a1618741ba795f0 not found: ID does not exist"
Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.563407 4922 scope.go:117] "RemoveContainer" containerID="11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c"
Jan 26 15:26:30 crc kubenswrapper[4922]: E0126 15:26:30.563746 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c\": container with ID starting with 11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c not found: ID does not exist" containerID="11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c"
Jan 26 15:26:30 crc kubenswrapper[4922]: I0126 15:26:30.563775 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c"} err="failed to get container status \"11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c\": rpc error: code = NotFound desc = could not find container \"11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c\": container with ID starting with 11af536fcac684932a999786bd994a29a0483364bdf3bdc5c5ea933bca7c3d3c not found: ID does not exist"
Jan 26 15:26:31 crc kubenswrapper[4922]: I0126 15:26:31.103756 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530d4639-cc1a-4f9e-adcb-9075e846cd75" path="/var/lib/kubelet/pods/530d4639-cc1a-4f9e-adcb-9075e846cd75/volumes"
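Note: the "ID does not exist" storm above is a benign race, not data loss: the PLEG-driven cleanup and the API-driven SyncLoop REMOVE both try to delete the same three containers, and whichever path loses finds CRI-O has already removed them. Cleanup code talking to a gRPC runtime typically treats NotFound as success; an illustrative Go sketch of that pattern follows (the runtimeClient interface is a hypothetical stand-in, not the kubelet's actual types):

    package cleanup

    import (
        "context"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // runtimeClient is a hypothetical stand-in for a CRI runtime connection;
    // the kubelet's real runtime interface is much richer.
    type runtimeClient interface {
        RemoveContainer(ctx context.Context, id string) error
    }

    // removeIfPresent swallows the NotFound race seen above: when two cleanup
    // paths race to delete the same container, the loser finds it already
    // gone, and "already gone" is success for a delete.
    func removeIfPresent(ctx context.Context, rt runtimeClient, id string) error {
        if err := rt.RemoveContainer(ctx, id); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }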
Jan 26 15:26:32 crc kubenswrapper[4922]: I0126 15:26:32.093181 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f"
Jan 26 15:26:32 crc kubenswrapper[4922]: E0126 15:26:32.093848 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:26:46 crc kubenswrapper[4922]: I0126 15:26:46.093874 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f"
Jan 26 15:26:46 crc kubenswrapper[4922]: E0126 15:26:46.094944 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:26:59 crc kubenswrapper[4922]: I0126 15:26:59.408587 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-69b95496c5-qvg59" podUID="a2bcb723-e3e3-41f8-9704-10a1f8e78bd7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 26 15:27:00 crc kubenswrapper[4922]: I0126 15:27:00.093309 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f"
Jan 26 15:27:00 crc kubenswrapper[4922]: E0126 15:27:00.093927 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:27:13 crc kubenswrapper[4922]: I0126 15:27:13.099482 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f"
Jan 26 15:27:13 crc kubenswrapper[4922]: I0126 15:27:13.912439 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"11e8840569ec13caaecddc1e71cb0ead642eb51e9ee4c2cb681ef3154d4c8e27"}
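Note: the repeating RemoveContainer / "back-off 5m0s" pairs above are one CrashLoopBackOff cycle for machine-config-daemon: each failed start roughly doubles the restart delay until it saturates at the 5m0s cap named in the error text, and the container finally comes back at 15:27:13. A rough sketch of that delay schedule (the 10s initial delay and the doubling are the kubelet's commonly cited defaults, asserted from memory rather than from this log):

    package main

    import (
        "fmt"
        "time"
    )

    // restartDelay doubles from an initial delay and saturates at the limit,
    // mirroring the "back-off 5m0s restarting failed container" message.
    func restartDelay(initial, limit time.Duration, failures int) time.Duration {
        d := initial
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= limit {
                return limit
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("restart %d -> wait %v\n", n, restartDelay(10*time.Second, 5*time.Minute, n))
        }
        // Prints: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
    }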
containerName="registry-server" Jan 26 15:28:59 crc kubenswrapper[4922]: I0126 15:28:59.886846 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:28:59 crc kubenswrapper[4922]: I0126 15:28:59.939195 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tx6pw"] Jan 26 15:28:59 crc kubenswrapper[4922]: I0126 15:28:59.990144 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-utilities\") pod \"certified-operators-tx6pw\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") " pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:28:59 crc kubenswrapper[4922]: I0126 15:28:59.990223 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg958\" (UniqueName: \"kubernetes.io/projected/e86343f4-539d-4d60-885d-e7b9c0c953bf-kube-api-access-cg958\") pod \"certified-operators-tx6pw\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") " pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:28:59 crc kubenswrapper[4922]: I0126 15:28:59.990391 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-catalog-content\") pod \"certified-operators-tx6pw\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") " pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:00 crc kubenswrapper[4922]: I0126 15:29:00.093451 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-catalog-content\") pod \"certified-operators-tx6pw\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") " pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:00 crc kubenswrapper[4922]: I0126 15:29:00.093584 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-utilities\") pod \"certified-operators-tx6pw\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") " pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:00 crc kubenswrapper[4922]: I0126 15:29:00.093647 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg958\" (UniqueName: \"kubernetes.io/projected/e86343f4-539d-4d60-885d-e7b9c0c953bf-kube-api-access-cg958\") pod \"certified-operators-tx6pw\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") " pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:00 crc kubenswrapper[4922]: I0126 15:29:00.093960 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-catalog-content\") pod \"certified-operators-tx6pw\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") " pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:00 crc kubenswrapper[4922]: I0126 15:29:00.094044 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-utilities\") pod \"certified-operators-tx6pw\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") " 
pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:00 crc kubenswrapper[4922]: I0126 15:29:00.118888 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg958\" (UniqueName: \"kubernetes.io/projected/e86343f4-539d-4d60-885d-e7b9c0c953bf-kube-api-access-cg958\") pod \"certified-operators-tx6pw\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") " pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:00 crc kubenswrapper[4922]: I0126 15:29:00.236554 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:00 crc kubenswrapper[4922]: I0126 15:29:00.777351 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tx6pw"] Jan 26 15:29:00 crc kubenswrapper[4922]: I0126 15:29:00.966001 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6pw" event={"ID":"e86343f4-539d-4d60-885d-e7b9c0c953bf","Type":"ContainerStarted","Data":"6c780e4159f64d86820589947a3347a1d2cd3e9b4466ff000b1e769ed14d2338"} Jan 26 15:29:01 crc kubenswrapper[4922]: I0126 15:29:01.977562 4922 generic.go:334] "Generic (PLEG): container finished" podID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerID="0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d" exitCode=0 Jan 26 15:29:01 crc kubenswrapper[4922]: I0126 15:29:01.977660 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6pw" event={"ID":"e86343f4-539d-4d60-885d-e7b9c0c953bf","Type":"ContainerDied","Data":"0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d"} Jan 26 15:29:04 crc kubenswrapper[4922]: I0126 15:29:04.001377 4922 generic.go:334] "Generic (PLEG): container finished" podID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerID="199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87" exitCode=0 Jan 26 15:29:04 crc kubenswrapper[4922]: I0126 15:29:04.001454 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6pw" event={"ID":"e86343f4-539d-4d60-885d-e7b9c0c953bf","Type":"ContainerDied","Data":"199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87"} Jan 26 15:29:06 crc kubenswrapper[4922]: I0126 15:29:06.027628 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6pw" event={"ID":"e86343f4-539d-4d60-885d-e7b9c0c953bf","Type":"ContainerStarted","Data":"cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9"} Jan 26 15:29:06 crc kubenswrapper[4922]: I0126 15:29:06.053289 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tx6pw" podStartSLOduration=4.32823197 podStartE2EDuration="7.053267163s" podCreationTimestamp="2026-01-26 15:28:59 +0000 UTC" firstStartedPulling="2026-01-26 15:29:01.980482753 +0000 UTC m=+4759.182745525" lastFinishedPulling="2026-01-26 15:29:04.705517946 +0000 UTC m=+4761.907780718" observedRunningTime="2026-01-26 15:29:06.048282086 +0000 UTC m=+4763.250544868" watchObservedRunningTime="2026-01-26 15:29:06.053267163 +0000 UTC m=+4763.255529935" Jan 26 15:29:10 crc kubenswrapper[4922]: I0126 15:29:10.237104 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:10 crc kubenswrapper[4922]: I0126 15:29:10.237711 4922 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:10 crc kubenswrapper[4922]: I0126 15:29:10.301857 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:11 crc kubenswrapper[4922]: I0126 15:29:11.118628 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:11 crc kubenswrapper[4922]: I0126 15:29:11.172976 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tx6pw"] Jan 26 15:29:13 crc kubenswrapper[4922]: I0126 15:29:13.088892 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tx6pw" podUID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerName="registry-server" containerID="cri-o://cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9" gracePeriod=2 Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.085556 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.100209 4922 generic.go:334] "Generic (PLEG): container finished" podID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerID="cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9" exitCode=0 Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.100254 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6pw" event={"ID":"e86343f4-539d-4d60-885d-e7b9c0c953bf","Type":"ContainerDied","Data":"cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9"} Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.100280 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6pw" event={"ID":"e86343f4-539d-4d60-885d-e7b9c0c953bf","Type":"ContainerDied","Data":"6c780e4159f64d86820589947a3347a1d2cd3e9b4466ff000b1e769ed14d2338"} Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.100301 4922 scope.go:117] "RemoveContainer" containerID="cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.100328 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tx6pw" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.129214 4922 scope.go:117] "RemoveContainer" containerID="199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.149719 4922 scope.go:117] "RemoveContainer" containerID="0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.200470 4922 scope.go:117] "RemoveContainer" containerID="cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9" Jan 26 15:29:14 crc kubenswrapper[4922]: E0126 15:29:14.201038 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9\": container with ID starting with cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9 not found: ID does not exist" containerID="cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.201093 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9"} err="failed to get container status \"cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9\": rpc error: code = NotFound desc = could not find container \"cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9\": container with ID starting with cc855b44104ee78fa4081bfd5b74d31e124ce1ca8575893479266b50d2f821b9 not found: ID does not exist" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.201120 4922 scope.go:117] "RemoveContainer" containerID="199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87" Jan 26 15:29:14 crc kubenswrapper[4922]: E0126 15:29:14.201590 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87\": container with ID starting with 199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87 not found: ID does not exist" containerID="199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.201623 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87"} err="failed to get container status \"199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87\": rpc error: code = NotFound desc = could not find container \"199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87\": container with ID starting with 199a5d0a1b663e0afb551ae83ff1470c50c2d01f61768d82dfda813e609a1f87 not found: ID does not exist" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.201642 4922 scope.go:117] "RemoveContainer" containerID="0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d" Jan 26 15:29:14 crc kubenswrapper[4922]: E0126 15:29:14.202007 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d\": container with ID starting with 0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d not found: ID does not exist" containerID="0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d" 
Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.202032 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d"} err="failed to get container status \"0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d\": rpc error: code = NotFound desc = could not find container \"0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d\": container with ID starting with 0169941d04f9b40c7fcfed87dcca1b314576516788a5784ce63d2d9ab04ff85d not found: ID does not exist"
Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.219909 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg958\" (UniqueName: \"kubernetes.io/projected/e86343f4-539d-4d60-885d-e7b9c0c953bf-kube-api-access-cg958\") pod \"e86343f4-539d-4d60-885d-e7b9c0c953bf\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") "
Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.219990 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-utilities\") pod \"e86343f4-539d-4d60-885d-e7b9c0c953bf\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") "
Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.220055 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-catalog-content\") pod \"e86343f4-539d-4d60-885d-e7b9c0c953bf\" (UID: \"e86343f4-539d-4d60-885d-e7b9c0c953bf\") "
Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.223994 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-utilities" (OuterVolumeSpecName: "utilities") pod "e86343f4-539d-4d60-885d-e7b9c0c953bf" (UID: "e86343f4-539d-4d60-885d-e7b9c0c953bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.228856 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86343f4-539d-4d60-885d-e7b9c0c953bf-kube-api-access-cg958" (OuterVolumeSpecName: "kube-api-access-cg958") pod "e86343f4-539d-4d60-885d-e7b9c0c953bf" (UID: "e86343f4-539d-4d60-885d-e7b9c0c953bf"). InnerVolumeSpecName "kube-api-access-cg958". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.274137 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e86343f4-539d-4d60-885d-e7b9c0c953bf" (UID: "e86343f4-539d-4d60-885d-e7b9c0c953bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.323922 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg958\" (UniqueName: \"kubernetes.io/projected/e86343f4-539d-4d60-885d-e7b9c0c953bf-kube-api-access-cg958\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.323992 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.324008 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e86343f4-539d-4d60-885d-e7b9c0c953bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.439705 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tx6pw"] Jan 26 15:29:14 crc kubenswrapper[4922]: I0126 15:29:14.448760 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tx6pw"] Jan 26 15:29:15 crc kubenswrapper[4922]: I0126 15:29:15.102852 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e86343f4-539d-4d60-885d-e7b9c0c953bf" path="/var/lib/kubelet/pods/e86343f4-539d-4d60-885d-e7b9c0c953bf/volumes" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.345340 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lckqx"] Jan 26 15:29:28 crc kubenswrapper[4922]: E0126 15:29:28.346359 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerName="extract-content" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.346375 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerName="extract-content" Jan 26 15:29:28 crc kubenswrapper[4922]: E0126 15:29:28.346404 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerName="registry-server" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.346411 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerName="registry-server" Jan 26 15:29:28 crc kubenswrapper[4922]: E0126 15:29:28.346432 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerName="extract-utilities" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.346439 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerName="extract-utilities" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.346653 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e86343f4-539d-4d60-885d-e7b9c0c953bf" containerName="registry-server" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.348625 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.356819 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lckqx"] Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.392496 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-catalog-content\") pod \"redhat-marketplace-lckqx\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.392606 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-utilities\") pod \"redhat-marketplace-lckqx\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.392706 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8cfq\" (UniqueName: \"kubernetes.io/projected/4359a977-d388-4026-a699-4cc6a3354930-kube-api-access-n8cfq\") pod \"redhat-marketplace-lckqx\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.495414 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-catalog-content\") pod \"redhat-marketplace-lckqx\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.495810 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-utilities\") pod \"redhat-marketplace-lckqx\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.495926 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8cfq\" (UniqueName: \"kubernetes.io/projected/4359a977-d388-4026-a699-4cc6a3354930-kube-api-access-n8cfq\") pod \"redhat-marketplace-lckqx\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.495988 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-catalog-content\") pod \"redhat-marketplace-lckqx\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.496233 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-utilities\") pod \"redhat-marketplace-lckqx\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.531959 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n8cfq\" (UniqueName: \"kubernetes.io/projected/4359a977-d388-4026-a699-4cc6a3354930-kube-api-access-n8cfq\") pod \"redhat-marketplace-lckqx\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:28 crc kubenswrapper[4922]: I0126 15:29:28.671253 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:29 crc kubenswrapper[4922]: I0126 15:29:29.177396 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lckqx"] Jan 26 15:29:29 crc kubenswrapper[4922]: I0126 15:29:29.252258 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lckqx" event={"ID":"4359a977-d388-4026-a699-4cc6a3354930","Type":"ContainerStarted","Data":"e17338ee13e1323386071c2b02d0e7ac3f5c95abdc33efcbd862b95f2634a83c"} Jan 26 15:29:30 crc kubenswrapper[4922]: I0126 15:29:30.266680 4922 generic.go:334] "Generic (PLEG): container finished" podID="4359a977-d388-4026-a699-4cc6a3354930" containerID="6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2" exitCode=0 Jan 26 15:29:30 crc kubenswrapper[4922]: I0126 15:29:30.266791 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lckqx" event={"ID":"4359a977-d388-4026-a699-4cc6a3354930","Type":"ContainerDied","Data":"6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2"} Jan 26 15:29:31 crc kubenswrapper[4922]: I0126 15:29:31.278424 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lckqx" event={"ID":"4359a977-d388-4026-a699-4cc6a3354930","Type":"ContainerStarted","Data":"d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0"} Jan 26 15:29:32 crc kubenswrapper[4922]: I0126 15:29:32.288910 4922 generic.go:334] "Generic (PLEG): container finished" podID="4359a977-d388-4026-a699-4cc6a3354930" containerID="d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0" exitCode=0 Jan 26 15:29:32 crc kubenswrapper[4922]: I0126 15:29:32.289036 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lckqx" event={"ID":"4359a977-d388-4026-a699-4cc6a3354930","Type":"ContainerDied","Data":"d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0"} Jan 26 15:29:33 crc kubenswrapper[4922]: I0126 15:29:33.302455 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lckqx" event={"ID":"4359a977-d388-4026-a699-4cc6a3354930","Type":"ContainerStarted","Data":"899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b"} Jan 26 15:29:33 crc kubenswrapper[4922]: I0126 15:29:33.318837 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lckqx" podStartSLOduration=2.770438235 podStartE2EDuration="5.318810443s" podCreationTimestamp="2026-01-26 15:29:28 +0000 UTC" firstStartedPulling="2026-01-26 15:29:30.26896963 +0000 UTC m=+4787.471232402" lastFinishedPulling="2026-01-26 15:29:32.817341838 +0000 UTC m=+4790.019604610" observedRunningTime="2026-01-26 15:29:33.317765203 +0000 UTC m=+4790.520027985" watchObservedRunningTime="2026-01-26 15:29:33.318810443 +0000 UTC m=+4790.521073215" Jan 26 15:29:38 crc kubenswrapper[4922]: I0126 15:29:38.671419 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:38 crc kubenswrapper[4922]: I0126 15:29:38.672111 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:38 crc kubenswrapper[4922]: I0126 15:29:38.727646 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:39 crc kubenswrapper[4922]: I0126 15:29:39.422592 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:39 crc kubenswrapper[4922]: I0126 15:29:39.483610 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lckqx"] Jan 26 15:29:41 crc kubenswrapper[4922]: I0126 15:29:41.307520 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:29:41 crc kubenswrapper[4922]: I0126 15:29:41.308189 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:29:41 crc kubenswrapper[4922]: I0126 15:29:41.380824 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lckqx" podUID="4359a977-d388-4026-a699-4cc6a3354930" containerName="registry-server" containerID="cri-o://899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b" gracePeriod=2 Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.373494 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.413906 4922 generic.go:334] "Generic (PLEG): container finished" podID="4359a977-d388-4026-a699-4cc6a3354930" containerID="899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b" exitCode=0 Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.413948 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lckqx" event={"ID":"4359a977-d388-4026-a699-4cc6a3354930","Type":"ContainerDied","Data":"899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b"} Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.413973 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lckqx" event={"ID":"4359a977-d388-4026-a699-4cc6a3354930","Type":"ContainerDied","Data":"e17338ee13e1323386071c2b02d0e7ac3f5c95abdc33efcbd862b95f2634a83c"} Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.413991 4922 scope.go:117] "RemoveContainer" containerID="899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.414265 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lckqx" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.453152 4922 scope.go:117] "RemoveContainer" containerID="d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.478772 4922 scope.go:117] "RemoveContainer" containerID="6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.505963 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8cfq\" (UniqueName: \"kubernetes.io/projected/4359a977-d388-4026-a699-4cc6a3354930-kube-api-access-n8cfq\") pod \"4359a977-d388-4026-a699-4cc6a3354930\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.506115 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-utilities\") pod \"4359a977-d388-4026-a699-4cc6a3354930\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.506222 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-catalog-content\") pod \"4359a977-d388-4026-a699-4cc6a3354930\" (UID: \"4359a977-d388-4026-a699-4cc6a3354930\") " Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.508677 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-utilities" (OuterVolumeSpecName: "utilities") pod "4359a977-d388-4026-a699-4cc6a3354930" (UID: "4359a977-d388-4026-a699-4cc6a3354930"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.516034 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4359a977-d388-4026-a699-4cc6a3354930-kube-api-access-n8cfq" (OuterVolumeSpecName: "kube-api-access-n8cfq") pod "4359a977-d388-4026-a699-4cc6a3354930" (UID: "4359a977-d388-4026-a699-4cc6a3354930"). InnerVolumeSpecName "kube-api-access-n8cfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.530744 4922 scope.go:117] "RemoveContainer" containerID="899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b" Jan 26 15:29:42 crc kubenswrapper[4922]: E0126 15:29:42.531331 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b\": container with ID starting with 899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b not found: ID does not exist" containerID="899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.531425 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b"} err="failed to get container status \"899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b\": rpc error: code = NotFound desc = could not find container \"899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b\": container with ID starting with 899a6177cd24e296f9b9a6a7cbd6280365744411322c9a12255b1b635d12d78b not found: ID does not exist" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.531556 4922 scope.go:117] "RemoveContainer" containerID="d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0" Jan 26 15:29:42 crc kubenswrapper[4922]: E0126 15:29:42.532198 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0\": container with ID starting with d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0 not found: ID does not exist" containerID="d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.532250 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0"} err="failed to get container status \"d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0\": rpc error: code = NotFound desc = could not find container \"d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0\": container with ID starting with d304183447d4fcb7a046c678a57b97fa04ee89a94409cd46104577d0cdf00bd0 not found: ID does not exist" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.532286 4922 scope.go:117] "RemoveContainer" containerID="6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.533233 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4359a977-d388-4026-a699-4cc6a3354930" (UID: "4359a977-d388-4026-a699-4cc6a3354930"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:29:42 crc kubenswrapper[4922]: E0126 15:29:42.533410 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2\": container with ID starting with 6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2 not found: ID does not exist" containerID="6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.533459 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2"} err="failed to get container status \"6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2\": rpc error: code = NotFound desc = could not find container \"6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2\": container with ID starting with 6345aa5bb485a6f82a282ec25cc70b90dd0c278c4a6a9344d5b58946dbc15cf2 not found: ID does not exist" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.610853 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8cfq\" (UniqueName: \"kubernetes.io/projected/4359a977-d388-4026-a699-4cc6a3354930-kube-api-access-n8cfq\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.610974 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.610993 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4359a977-d388-4026-a699-4cc6a3354930-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.767914 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lckqx"] Jan 26 15:29:42 crc kubenswrapper[4922]: I0126 15:29:42.778361 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lckqx"] Jan 26 15:29:43 crc kubenswrapper[4922]: I0126 15:29:43.104976 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4359a977-d388-4026-a699-4cc6a3354930" path="/var/lib/kubelet/pods/4359a977-d388-4026-a699-4cc6a3354930/volumes" Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.151307 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"] Jan 26 15:30:00 crc kubenswrapper[4922]: E0126 15:30:00.152273 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4359a977-d388-4026-a699-4cc6a3354930" containerName="extract-content" Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.152288 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4359a977-d388-4026-a699-4cc6a3354930" containerName="extract-content" Jan 26 15:30:00 crc kubenswrapper[4922]: E0126 15:30:00.152300 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4359a977-d388-4026-a699-4cc6a3354930" containerName="extract-utilities" Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.152307 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4359a977-d388-4026-a699-4cc6a3354930" containerName="extract-utilities" Jan 26 15:30:00 crc kubenswrapper[4922]: E0126 
Jan 26 15:30:00 crc kubenswrapper[4922]: E0126 15:30:00.152331 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4359a977-d388-4026-a699-4cc6a3354930" containerName="registry-server"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.152339 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4359a977-d388-4026-a699-4cc6a3354930" containerName="registry-server"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.152565 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4359a977-d388-4026-a699-4cc6a3354930" containerName="registry-server"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.153328 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.155316 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.155376 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.160703 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"]
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.264426 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxc74\" (UniqueName: \"kubernetes.io/projected/09216db7-f856-4a28-8250-75b2154d5df0-kube-api-access-rxc74\") pod \"collect-profiles-29490690-4kj8j\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.264810 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09216db7-f856-4a28-8250-75b2154d5df0-secret-volume\") pod \"collect-profiles-29490690-4kj8j\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.264905 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09216db7-f856-4a28-8250-75b2154d5df0-config-volume\") pod \"collect-profiles-29490690-4kj8j\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.365882 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09216db7-f856-4a28-8250-75b2154d5df0-config-volume\") pod \"collect-profiles-29490690-4kj8j\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.366019 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxc74\" (UniqueName: \"kubernetes.io/projected/09216db7-f856-4a28-8250-75b2154d5df0-kube-api-access-rxc74\") pod \"collect-profiles-29490690-4kj8j\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.366118 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09216db7-f856-4a28-8250-75b2154d5df0-secret-volume\") pod \"collect-profiles-29490690-4kj8j\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.366708 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09216db7-f856-4a28-8250-75b2154d5df0-config-volume\") pod \"collect-profiles-29490690-4kj8j\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.381177 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09216db7-f856-4a28-8250-75b2154d5df0-secret-volume\") pod \"collect-profiles-29490690-4kj8j\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.388736 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxc74\" (UniqueName: \"kubernetes.io/projected/09216db7-f856-4a28-8250-75b2154d5df0-kube-api-access-rxc74\") pod \"collect-profiles-29490690-4kj8j\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.488944 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:00 crc kubenswrapper[4922]: I0126 15:30:00.974679 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"]
Jan 26 15:30:01 crc kubenswrapper[4922]: I0126 15:30:01.594638 4922 generic.go:334] "Generic (PLEG): container finished" podID="09216db7-f856-4a28-8250-75b2154d5df0" containerID="0ee195f214a0fc5257a4330f0387e25db8ed6e080608287b195858b300ba892e" exitCode=0
Jan 26 15:30:01 crc kubenswrapper[4922]: I0126 15:30:01.594827 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j" event={"ID":"09216db7-f856-4a28-8250-75b2154d5df0","Type":"ContainerDied","Data":"0ee195f214a0fc5257a4330f0387e25db8ed6e080608287b195858b300ba892e"}
Jan 26 15:30:01 crc kubenswrapper[4922]: I0126 15:30:01.594971 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j" event={"ID":"09216db7-f856-4a28-8250-75b2154d5df0","Type":"ContainerStarted","Data":"dcced48519ace2d1f0b8f6a0dcf9003ef30ba80eec08534c13506e7115817e04"}
Jan 26 15:30:02 crc kubenswrapper[4922]: I0126 15:30:02.987883 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.120532 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09216db7-f856-4a28-8250-75b2154d5df0-config-volume\") pod \"09216db7-f856-4a28-8250-75b2154d5df0\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") "
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.120904 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxc74\" (UniqueName: \"kubernetes.io/projected/09216db7-f856-4a28-8250-75b2154d5df0-kube-api-access-rxc74\") pod \"09216db7-f856-4a28-8250-75b2154d5df0\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") "
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.120959 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09216db7-f856-4a28-8250-75b2154d5df0-secret-volume\") pod \"09216db7-f856-4a28-8250-75b2154d5df0\" (UID: \"09216db7-f856-4a28-8250-75b2154d5df0\") "
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.124382 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09216db7-f856-4a28-8250-75b2154d5df0-config-volume" (OuterVolumeSpecName: "config-volume") pod "09216db7-f856-4a28-8250-75b2154d5df0" (UID: "09216db7-f856-4a28-8250-75b2154d5df0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.129602 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09216db7-f856-4a28-8250-75b2154d5df0-kube-api-access-rxc74" (OuterVolumeSpecName: "kube-api-access-rxc74") pod "09216db7-f856-4a28-8250-75b2154d5df0" (UID: "09216db7-f856-4a28-8250-75b2154d5df0"). InnerVolumeSpecName "kube-api-access-rxc74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.130541 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09216db7-f856-4a28-8250-75b2154d5df0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "09216db7-f856-4a28-8250-75b2154d5df0" (UID: "09216db7-f856-4a28-8250-75b2154d5df0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.224505 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09216db7-f856-4a28-8250-75b2154d5df0-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.224553 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09216db7-f856-4a28-8250-75b2154d5df0-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.224566 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxc74\" (UniqueName: \"kubernetes.io/projected/09216db7-f856-4a28-8250-75b2154d5df0-kube-api-access-rxc74\") on node \"crc\" DevicePath \"\""
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.619314 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j" event={"ID":"09216db7-f856-4a28-8250-75b2154d5df0","Type":"ContainerDied","Data":"dcced48519ace2d1f0b8f6a0dcf9003ef30ba80eec08534c13506e7115817e04"}
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.619375 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4kj8j"
Jan 26 15:30:03 crc kubenswrapper[4922]: I0126 15:30:03.619380 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcced48519ace2d1f0b8f6a0dcf9003ef30ba80eec08534c13506e7115817e04"
Jan 26 15:30:04 crc kubenswrapper[4922]: I0126 15:30:04.080653 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"]
Jan 26 15:30:04 crc kubenswrapper[4922]: I0126 15:30:04.089750 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-zlj7t"]
Jan 26 15:30:05 crc kubenswrapper[4922]: I0126 15:30:05.107206 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c" path="/var/lib/kubelet/pods/6825dd3e-7e80-4f5e-846e-f9ed3c7e9d5c/volumes"
Jan 26 15:30:11 crc kubenswrapper[4922]: I0126 15:30:11.306491 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:30:11 crc kubenswrapper[4922]: I0126 15:30:11.307030 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:30:21 crc kubenswrapper[4922]: I0126 15:30:21.044249 4922 scope.go:117] "RemoveContainer" containerID="535ab1ea368304a28468c09383b87e151d44939f96cfefef531cd842780dd643"
Jan 26 15:30:41 crc kubenswrapper[4922]: I0126 15:30:41.306513 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:30:41 crc kubenswrapper[4922]: I0126 15:30:41.307099 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:30:41 crc kubenswrapper[4922]: I0126 15:30:41.307148 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j"
Jan 26 15:30:41 crc kubenswrapper[4922]: I0126 15:30:41.307965 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11e8840569ec13caaecddc1e71cb0ead642eb51e9ee4c2cb681ef3154d4c8e27"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 15:30:41 crc kubenswrapper[4922]: I0126 15:30:41.308011 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://11e8840569ec13caaecddc1e71cb0ead642eb51e9ee4c2cb681ef3154d4c8e27" gracePeriod=600
Jan 26 15:30:41 crc kubenswrapper[4922]: I0126 15:30:41.987637 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="11e8840569ec13caaecddc1e71cb0ead642eb51e9ee4c2cb681ef3154d4c8e27" exitCode=0
Jan 26 15:30:41 crc kubenswrapper[4922]: I0126 15:30:41.987722 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"11e8840569ec13caaecddc1e71cb0ead642eb51e9ee4c2cb681ef3154d4c8e27"}
Jan 26 15:30:41 crc kubenswrapper[4922]: I0126 15:30:41.988115 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254"}
Jan 26 15:30:41 crc kubenswrapper[4922]: I0126 15:30:41.988142 4922 scope.go:117] "RemoveContainer" containerID="765e92c64550298c286e163420f655232aa3bb79edd9e7f57f297701614beb9f"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.118551 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nlcfx"]
Jan 26 15:32:04 crc kubenswrapper[4922]: E0126 15:32:04.119653 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09216db7-f856-4a28-8250-75b2154d5df0" containerName="collect-profiles"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.119673 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="09216db7-f856-4a28-8250-75b2154d5df0" containerName="collect-profiles"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.119953 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="09216db7-f856-4a28-8250-75b2154d5df0" containerName="collect-profiles"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.122156 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.133222 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlcfx"]
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.249437 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv9bd\" (UniqueName: \"kubernetes.io/projected/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-kube-api-access-kv9bd\") pod \"community-operators-nlcfx\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") " pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.249795 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-catalog-content\") pod \"community-operators-nlcfx\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") " pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.249893 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-utilities\") pod \"community-operators-nlcfx\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") " pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.352080 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv9bd\" (UniqueName: \"kubernetes.io/projected/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-kube-api-access-kv9bd\") pod \"community-operators-nlcfx\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") " pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.352162 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-catalog-content\") pod \"community-operators-nlcfx\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") " pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.352190 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-utilities\") pod \"community-operators-nlcfx\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") " pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.352820 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-catalog-content\") pod \"community-operators-nlcfx\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") " pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.352840 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-utilities\") pod \"community-operators-nlcfx\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") " pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.371861 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv9bd\" (UniqueName: \"kubernetes.io/projected/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-kube-api-access-kv9bd\") pod \"community-operators-nlcfx\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") " pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.444562 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:04 crc kubenswrapper[4922]: I0126 15:32:04.968396 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlcfx"]
Jan 26 15:32:05 crc kubenswrapper[4922]: I0126 15:32:05.788440 4922 generic.go:334] "Generic (PLEG): container finished" podID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerID="f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5" exitCode=0
Jan 26 15:32:05 crc kubenswrapper[4922]: I0126 15:32:05.788491 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcfx" event={"ID":"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d","Type":"ContainerDied","Data":"f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5"}
Jan 26 15:32:05 crc kubenswrapper[4922]: I0126 15:32:05.788887 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcfx" event={"ID":"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d","Type":"ContainerStarted","Data":"d176d5a5597c6f102c03e800aff34f956fe350758f1ed0a6ea0bfe0c271db000"}
Jan 26 15:32:05 crc kubenswrapper[4922]: I0126 15:32:05.790765 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 15:32:06 crc kubenswrapper[4922]: I0126 15:32:06.808124 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcfx" event={"ID":"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d","Type":"ContainerStarted","Data":"28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f"}
Jan 26 15:32:07 crc kubenswrapper[4922]: I0126 15:32:07.820426 4922 generic.go:334] "Generic (PLEG): container finished" podID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerID="28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f" exitCode=0
Jan 26 15:32:07 crc kubenswrapper[4922]: I0126 15:32:07.820493 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcfx" event={"ID":"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d","Type":"ContainerDied","Data":"28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f"}
Jan 26 15:32:08 crc kubenswrapper[4922]: I0126 15:32:08.848585 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcfx" event={"ID":"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d","Type":"ContainerStarted","Data":"923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad"}
Jan 26 15:32:08 crc kubenswrapper[4922]: I0126 15:32:08.872787 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nlcfx" podStartSLOduration=2.200861705 podStartE2EDuration="4.87275065s" podCreationTimestamp="2026-01-26 15:32:04 +0000 UTC" firstStartedPulling="2026-01-26 15:32:05.79052581 +0000 UTC m=+4942.992788582" lastFinishedPulling="2026-01-26 15:32:08.462414745 +0000 UTC m=+4945.664677527" observedRunningTime="2026-01-26 15:32:08.86766917 +0000 UTC m=+4946.069931942" watchObservedRunningTime="2026-01-26 15:32:08.87275065 +0000 UTC m=+4946.075013422"
Jan 26 15:32:14 crc kubenswrapper[4922]: I0126 15:32:14.445133 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:14 crc kubenswrapper[4922]: I0126 15:32:14.445773 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:14 crc kubenswrapper[4922]: I0126 15:32:14.493953 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:14 crc kubenswrapper[4922]: I0126 15:32:14.978795 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:15 crc kubenswrapper[4922]: I0126 15:32:15.076702 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlcfx"]
Jan 26 15:32:16 crc kubenswrapper[4922]: I0126 15:32:16.932552 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nlcfx" podUID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerName="registry-server" containerID="cri-o://923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad" gracePeriod=2
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.501593 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.658652 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-utilities\") pod \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") "
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.658767 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-catalog-content\") pod \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") "
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.659041 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv9bd\" (UniqueName: \"kubernetes.io/projected/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-kube-api-access-kv9bd\") pod \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\" (UID: \"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d\") "
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.659945 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-utilities" (OuterVolumeSpecName: "utilities") pod "6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" (UID: "6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.676300 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-kube-api-access-kv9bd" (OuterVolumeSpecName: "kube-api-access-kv9bd") pod "6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" (UID: "6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d"). InnerVolumeSpecName "kube-api-access-kv9bd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.725651 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" (UID: "6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.761104 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.761146 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv9bd\" (UniqueName: \"kubernetes.io/projected/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-kube-api-access-kv9bd\") on node \"crc\" DevicePath \"\""
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.761159 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.943459 4922 generic.go:334] "Generic (PLEG): container finished" podID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerID="923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad" exitCode=0
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.943505 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcfx" event={"ID":"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d","Type":"ContainerDied","Data":"923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad"}
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.943532 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcfx" event={"ID":"6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d","Type":"ContainerDied","Data":"d176d5a5597c6f102c03e800aff34f956fe350758f1ed0a6ea0bfe0c271db000"}
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.943549 4922 scope.go:117] "RemoveContainer" containerID="923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad"
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.943594 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlcfx"
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.964071 4922 scope.go:117] "RemoveContainer" containerID="28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f"
Jan 26 15:32:17 crc kubenswrapper[4922]: I0126 15:32:17.995856 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlcfx"]
Jan 26 15:32:18 crc kubenswrapper[4922]: I0126 15:32:18.012873 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nlcfx"]
Jan 26 15:32:18 crc kubenswrapper[4922]: I0126 15:32:18.016352 4922 scope.go:117] "RemoveContainer" containerID="f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5"
Jan 26 15:32:18 crc kubenswrapper[4922]: I0126 15:32:18.060283 4922 scope.go:117] "RemoveContainer" containerID="923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad"
Jan 26 15:32:18 crc kubenswrapper[4922]: E0126 15:32:18.060903 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad\": container with ID starting with 923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad not found: ID does not exist" containerID="923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad"
Jan 26 15:32:18 crc kubenswrapper[4922]: I0126 15:32:18.060949 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad"} err="failed to get container status \"923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad\": rpc error: code = NotFound desc = could not find container \"923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad\": container with ID starting with 923ca8f83812783557a93380bdf38e147eee7dbca803e69d09745f328ffd39ad not found: ID does not exist"
Jan 26 15:32:18 crc kubenswrapper[4922]: I0126 15:32:18.060980 4922 scope.go:117] "RemoveContainer" containerID="28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f"
Jan 26 15:32:18 crc kubenswrapper[4922]: E0126 15:32:18.061475 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f\": container with ID starting with 28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f not found: ID does not exist" containerID="28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f"
Jan 26 15:32:18 crc kubenswrapper[4922]: I0126 15:32:18.061517 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f"} err="failed to get container status \"28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f\": rpc error: code = NotFound desc = could not find container \"28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f\": container with ID starting with 28845acbc96ede7536b7ce8961408c7f178f6ced244e2217b5775f35ce04071f not found: ID does not exist"
Jan 26 15:32:18 crc kubenswrapper[4922]: I0126 15:32:18.061550 4922 scope.go:117] "RemoveContainer" containerID="f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5"
Jan 26 15:32:18 crc kubenswrapper[4922]: E0126 15:32:18.069078 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5\": container with ID starting with f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5 not found: ID does not exist" containerID="f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5"
Jan 26 15:32:18 crc kubenswrapper[4922]: I0126 15:32:18.069130 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5"} err="failed to get container status \"f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5\": rpc error: code = NotFound desc = could not find container \"f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5\": container with ID starting with f8d956de9124ab350dcf73ce93eda35d5c442292b8c50ffb080f80192e9b6eb5 not found: ID does not exist"
Jan 26 15:32:19 crc kubenswrapper[4922]: I0126 15:32:19.104652 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" path="/var/lib/kubelet/pods/6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d/volumes"
Jan 26 15:32:41 crc kubenswrapper[4922]: I0126 15:32:41.306613 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:32:41 crc kubenswrapper[4922]: I0126 15:32:41.307165 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:33:11 crc kubenswrapper[4922]: I0126 15:33:11.307987 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:33:11 crc kubenswrapper[4922]: I0126 15:33:11.308695 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:33:41 crc kubenswrapper[4922]: I0126 15:33:41.306441 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:33:41 crc kubenswrapper[4922]: I0126 15:33:41.307038 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 15:33:41 crc kubenswrapper[4922]: I0126 15:33:41.307667 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:33:41 crc kubenswrapper[4922]: I0126 15:33:41.307721 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" gracePeriod=600 Jan 26 15:33:41 crc kubenswrapper[4922]: E0126 15:33:41.430474 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:33:41 crc kubenswrapper[4922]: I0126 15:33:41.787176 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" exitCode=0 Jan 26 15:33:41 crc kubenswrapper[4922]: I0126 15:33:41.787210 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254"} Jan 26 15:33:41 crc kubenswrapper[4922]: I0126 15:33:41.787812 4922 scope.go:117] "RemoveContainer" containerID="11e8840569ec13caaecddc1e71cb0ead642eb51e9ee4c2cb681ef3154d4c8e27" Jan 26 15:33:41 crc kubenswrapper[4922]: I0126 15:33:41.788523 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:33:41 crc kubenswrapper[4922]: E0126 15:33:41.788890 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:33:53 crc kubenswrapper[4922]: I0126 15:33:53.098352 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:33:53 crc kubenswrapper[4922]: E0126 15:33:53.099363 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:34:07 crc 
kubenswrapper[4922]: I0126 15:34:07.092523 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:34:07 crc kubenswrapper[4922]: E0126 15:34:07.093707 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:34:20 crc kubenswrapper[4922]: I0126 15:34:20.093914 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:34:20 crc kubenswrapper[4922]: E0126 15:34:20.096528 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:34:32 crc kubenswrapper[4922]: I0126 15:34:32.093394 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:34:32 crc kubenswrapper[4922]: E0126 15:34:32.095485 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:34:45 crc kubenswrapper[4922]: I0126 15:34:45.092956 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:34:45 crc kubenswrapper[4922]: E0126 15:34:45.093893 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:34:57 crc kubenswrapper[4922]: I0126 15:34:57.093358 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:34:57 crc kubenswrapper[4922]: E0126 15:34:57.094190 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:35:12 crc kubenswrapper[4922]: I0126 15:35:12.093129 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:35:12 crc 
kubenswrapper[4922]: E0126 15:35:12.094034 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:35:24 crc kubenswrapper[4922]: I0126 15:35:24.093765 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:35:24 crc kubenswrapper[4922]: E0126 15:35:24.094696 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:35:35 crc kubenswrapper[4922]: I0126 15:35:35.092277 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:35:35 crc kubenswrapper[4922]: E0126 15:35:35.092975 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:35:49 crc kubenswrapper[4922]: I0126 15:35:49.092473 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:35:49 crc kubenswrapper[4922]: E0126 15:35:49.093371 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:36:02 crc kubenswrapper[4922]: I0126 15:36:02.092988 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:36:02 crc kubenswrapper[4922]: E0126 15:36:02.093701 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:36:15 crc kubenswrapper[4922]: I0126 15:36:15.092380 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:36:15 crc kubenswrapper[4922]: E0126 15:36:15.093217 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:36:28 crc kubenswrapper[4922]: I0126 15:36:28.092822 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:36:28 crc kubenswrapper[4922]: E0126 15:36:28.095393 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:36:43 crc kubenswrapper[4922]: I0126 15:36:43.104683 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:36:43 crc kubenswrapper[4922]: E0126 15:36:43.105573 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:36:54 crc kubenswrapper[4922]: I0126 15:36:54.092854 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:36:54 crc kubenswrapper[4922]: E0126 15:36:54.093539 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:37:08 crc kubenswrapper[4922]: I0126 15:37:08.092166 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:37:08 crc kubenswrapper[4922]: E0126 15:37:08.092980 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:37:19 crc kubenswrapper[4922]: I0126 15:37:19.092996 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:37:19 crc kubenswrapper[4922]: E0126 15:37:19.094014 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.100702 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:37:33 crc kubenswrapper[4922]: E0126 15:37:33.101847 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.129025 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rpzrr"] Jan 26 15:37:33 crc kubenswrapper[4922]: E0126 15:37:33.129562 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerName="registry-server" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.129585 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerName="registry-server" Jan 26 15:37:33 crc kubenswrapper[4922]: E0126 15:37:33.129603 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerName="extract-content" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.129613 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerName="extract-content" Jan 26 15:37:33 crc kubenswrapper[4922]: E0126 15:37:33.129632 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerName="extract-utilities" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.129641 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerName="extract-utilities" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.129907 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba8cb1a-4c6c-4d85-990e-4a6d53c1332d" containerName="registry-server" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.131909 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.152796 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rpzrr"] Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.251671 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvtk6\" (UniqueName: \"kubernetes.io/projected/0051a552-1dea-4470-a4fd-e7d14a69f88b-kube-api-access-cvtk6\") pod \"redhat-operators-rpzrr\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") " pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.251947 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-utilities\") pod \"redhat-operators-rpzrr\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") " pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.252982 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-catalog-content\") pod \"redhat-operators-rpzrr\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") " pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.354918 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-utilities\") pod \"redhat-operators-rpzrr\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") " pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.354973 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-catalog-content\") pod \"redhat-operators-rpzrr\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") " pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.355016 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvtk6\" (UniqueName: \"kubernetes.io/projected/0051a552-1dea-4470-a4fd-e7d14a69f88b-kube-api-access-cvtk6\") pod \"redhat-operators-rpzrr\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") " pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.355980 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-utilities\") pod \"redhat-operators-rpzrr\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") " pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.356321 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-catalog-content\") pod \"redhat-operators-rpzrr\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") " pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.376116 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cvtk6\" (UniqueName: \"kubernetes.io/projected/0051a552-1dea-4470-a4fd-e7d14a69f88b-kube-api-access-cvtk6\") pod \"redhat-operators-rpzrr\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") " pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:33 crc kubenswrapper[4922]: I0126 15:37:33.456888 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rpzrr" Jan 26 15:37:34 crc kubenswrapper[4922]: I0126 15:37:34.009703 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rpzrr"] Jan 26 15:37:34 crc kubenswrapper[4922]: I0126 15:37:34.203737 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpzrr" event={"ID":"0051a552-1dea-4470-a4fd-e7d14a69f88b","Type":"ContainerStarted","Data":"f2c5433481e869c69280e0500717f3ca7c5dcd6b57f0d76ff6727171a792a868"} Jan 26 15:37:35 crc kubenswrapper[4922]: I0126 15:37:35.219677 4922 generic.go:334] "Generic (PLEG): container finished" podID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerID="235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24" exitCode=0 Jan 26 15:37:35 crc kubenswrapper[4922]: I0126 15:37:35.219798 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpzrr" event={"ID":"0051a552-1dea-4470-a4fd-e7d14a69f88b","Type":"ContainerDied","Data":"235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24"} Jan 26 15:37:35 crc kubenswrapper[4922]: I0126 15:37:35.223552 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:37:36 crc kubenswrapper[4922]: I0126 15:37:36.232289 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpzrr" event={"ID":"0051a552-1dea-4470-a4fd-e7d14a69f88b","Type":"ContainerStarted","Data":"6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43"} Jan 26 15:37:41 crc kubenswrapper[4922]: I0126 15:37:41.277803 4922 generic.go:334] "Generic (PLEG): container finished" podID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerID="6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43" exitCode=0 Jan 26 15:37:41 crc kubenswrapper[4922]: I0126 15:37:41.277910 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpzrr" event={"ID":"0051a552-1dea-4470-a4fd-e7d14a69f88b","Type":"ContainerDied","Data":"6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43"} Jan 26 15:37:42 crc kubenswrapper[4922]: I0126 15:37:42.292578 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpzrr" event={"ID":"0051a552-1dea-4470-a4fd-e7d14a69f88b","Type":"ContainerStarted","Data":"c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f"} Jan 26 15:37:42 crc kubenswrapper[4922]: I0126 15:37:42.317346 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rpzrr" podStartSLOduration=2.7732756 podStartE2EDuration="9.317328101s" podCreationTimestamp="2026-01-26 15:37:33 +0000 UTC" firstStartedPulling="2026-01-26 15:37:35.223223749 +0000 UTC m=+5272.425486521" lastFinishedPulling="2026-01-26 15:37:41.76727625 +0000 UTC m=+5278.969539022" observedRunningTime="2026-01-26 15:37:42.313114775 +0000 UTC m=+5279.515377577" watchObservedRunningTime="2026-01-26 15:37:42.317328101 +0000 UTC m=+5279.519590873" Jan 26 15:37:43 crc 
Jan 26 15:37:43 crc kubenswrapper[4922]: I0126 15:37:43.457129 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rpzrr"
Jan 26 15:37:43 crc kubenswrapper[4922]: I0126 15:37:43.457480 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rpzrr"
Jan 26 15:37:44 crc kubenswrapper[4922]: I0126 15:37:44.507174 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rpzrr" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerName="registry-server" probeResult="failure" output=<
Jan 26 15:37:44 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s
Jan 26 15:37:44 crc kubenswrapper[4922]: >
Jan 26 15:37:47 crc kubenswrapper[4922]: I0126 15:37:47.092302 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254"
Jan 26 15:37:47 crc kubenswrapper[4922]: E0126 15:37:47.093074 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:37:53 crc kubenswrapper[4922]: I0126 15:37:53.504550 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rpzrr"
Jan 26 15:37:53 crc kubenswrapper[4922]: I0126 15:37:53.562609 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rpzrr"
Jan 26 15:37:53 crc kubenswrapper[4922]: I0126 15:37:53.747007 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rpzrr"]
Jan 26 15:37:55 crc kubenswrapper[4922]: I0126 15:37:55.421275 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rpzrr" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerName="registry-server" containerID="cri-o://c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f" gracePeriod=2
Jan 26 15:37:55 crc kubenswrapper[4922]: I0126 15:37:55.957757 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rpzrr"
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.086259 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-catalog-content\") pod \"0051a552-1dea-4470-a4fd-e7d14a69f88b\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") "
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.086440 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvtk6\" (UniqueName: \"kubernetes.io/projected/0051a552-1dea-4470-a4fd-e7d14a69f88b-kube-api-access-cvtk6\") pod \"0051a552-1dea-4470-a4fd-e7d14a69f88b\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") "
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.086518 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-utilities\") pod \"0051a552-1dea-4470-a4fd-e7d14a69f88b\" (UID: \"0051a552-1dea-4470-a4fd-e7d14a69f88b\") "
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.087753 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-utilities" (OuterVolumeSpecName: "utilities") pod "0051a552-1dea-4470-a4fd-e7d14a69f88b" (UID: "0051a552-1dea-4470-a4fd-e7d14a69f88b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.095134 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0051a552-1dea-4470-a4fd-e7d14a69f88b-kube-api-access-cvtk6" (OuterVolumeSpecName: "kube-api-access-cvtk6") pod "0051a552-1dea-4470-a4fd-e7d14a69f88b" (UID: "0051a552-1dea-4470-a4fd-e7d14a69f88b"). InnerVolumeSpecName "kube-api-access-cvtk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.189911 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvtk6\" (UniqueName: \"kubernetes.io/projected/0051a552-1dea-4470-a4fd-e7d14a69f88b-kube-api-access-cvtk6\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.190412 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.222882 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0051a552-1dea-4470-a4fd-e7d14a69f88b" (UID: "0051a552-1dea-4470-a4fd-e7d14a69f88b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.292955 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0051a552-1dea-4470-a4fd-e7d14a69f88b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.434319 4922 generic.go:334] "Generic (PLEG): container finished" podID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerID="c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f" exitCode=0
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.434369 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpzrr" event={"ID":"0051a552-1dea-4470-a4fd-e7d14a69f88b","Type":"ContainerDied","Data":"c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f"}
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.434402 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpzrr" event={"ID":"0051a552-1dea-4470-a4fd-e7d14a69f88b","Type":"ContainerDied","Data":"f2c5433481e869c69280e0500717f3ca7c5dcd6b57f0d76ff6727171a792a868"}
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.434422 4922 scope.go:117] "RemoveContainer" containerID="c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f"
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.434417 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rpzrr"
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.476585 4922 scope.go:117] "RemoveContainer" containerID="6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43"
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.481686 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rpzrr"]
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.490541 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rpzrr"]
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.505939 4922 scope.go:117] "RemoveContainer" containerID="235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24"
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.562698 4922 scope.go:117] "RemoveContainer" containerID="c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f"
Jan 26 15:37:56 crc kubenswrapper[4922]: E0126 15:37:56.563673 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f\": container with ID starting with c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f not found: ID does not exist" containerID="c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f"
Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.563745 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f"} err="failed to get container status \"c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f\": rpc error: code = NotFound desc = could not find container \"c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f\": container with ID starting with c50788d04e4e7e66d44feefa2a9634882c4752ebab20427c6b5847a42a1e742f not found: ID does not exist"
Jan 26 15:37:56 crc
kubenswrapper[4922]: I0126 15:37:56.563792 4922 scope.go:117] "RemoveContainer" containerID="6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43" Jan 26 15:37:56 crc kubenswrapper[4922]: E0126 15:37:56.564461 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43\": container with ID starting with 6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43 not found: ID does not exist" containerID="6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43" Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.564502 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43"} err="failed to get container status \"6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43\": rpc error: code = NotFound desc = could not find container \"6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43\": container with ID starting with 6086928085b7028d801efa87936991f96720c7d6c5e5ad4135fe9c58376bba43 not found: ID does not exist" Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.564532 4922 scope.go:117] "RemoveContainer" containerID="235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24" Jan 26 15:37:56 crc kubenswrapper[4922]: E0126 15:37:56.564998 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24\": container with ID starting with 235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24 not found: ID does not exist" containerID="235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24" Jan 26 15:37:56 crc kubenswrapper[4922]: I0126 15:37:56.565056 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24"} err="failed to get container status \"235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24\": rpc error: code = NotFound desc = could not find container \"235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24\": container with ID starting with 235391a00f9ff62850fe11fcb0c4e5b6cf49661de77dbea37c25d285da4c6d24 not found: ID does not exist" Jan 26 15:37:57 crc kubenswrapper[4922]: I0126 15:37:57.104421 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" path="/var/lib/kubelet/pods/0051a552-1dea-4470-a4fd-e7d14a69f88b/volumes" Jan 26 15:37:59 crc kubenswrapper[4922]: I0126 15:37:59.092825 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:37:59 crc kubenswrapper[4922]: E0126 15:37:59.093407 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:38:10 crc kubenswrapper[4922]: I0126 15:38:10.093289 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" 
Jan 26 15:38:10 crc kubenswrapper[4922]: E0126 15:38:10.095080 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:38:23 crc kubenswrapper[4922]: I0126 15:38:23.106184 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:38:23 crc kubenswrapper[4922]: E0126 15:38:23.107382 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:38:37 crc kubenswrapper[4922]: I0126 15:38:37.092949 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:38:37 crc kubenswrapper[4922]: E0126 15:38:37.093694 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:38:48 crc kubenswrapper[4922]: I0126 15:38:48.092833 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:38:48 crc kubenswrapper[4922]: I0126 15:38:48.960718 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"30f2f52da32c072f15496ac9a2d78ed8632070590edf42b4139a212a500d3460"} Jan 26 15:39:07 crc kubenswrapper[4922]: I0126 15:39:07.894506 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jgdc"] Jan 26 15:39:07 crc kubenswrapper[4922]: E0126 15:39:07.895596 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerName="extract-content" Jan 26 15:39:07 crc kubenswrapper[4922]: I0126 15:39:07.895611 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerName="extract-content" Jan 26 15:39:07 crc kubenswrapper[4922]: E0126 15:39:07.895626 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerName="extract-utilities" Jan 26 15:39:07 crc kubenswrapper[4922]: I0126 15:39:07.895633 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerName="extract-utilities" Jan 26 15:39:07 crc kubenswrapper[4922]: E0126 15:39:07.895650 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerName="registry-server" Jan 26 15:39:07 crc 
kubenswrapper[4922]: I0126 15:39:07.895657 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerName="registry-server" Jan 26 15:39:07 crc kubenswrapper[4922]: I0126 15:39:07.895897 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0051a552-1dea-4470-a4fd-e7d14a69f88b" containerName="registry-server" Jan 26 15:39:07 crc kubenswrapper[4922]: I0126 15:39:07.897616 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:07 crc kubenswrapper[4922]: I0126 15:39:07.906349 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jgdc"] Jan 26 15:39:07 crc kubenswrapper[4922]: I0126 15:39:07.973587 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-utilities\") pod \"certified-operators-8jgdc\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:07 crc kubenswrapper[4922]: I0126 15:39:07.974115 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq769\" (UniqueName: \"kubernetes.io/projected/b98cb77d-6126-4e89-9f1f-a113974f2b5b-kube-api-access-rq769\") pod \"certified-operators-8jgdc\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:07 crc kubenswrapper[4922]: I0126 15:39:07.974254 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-catalog-content\") pod \"certified-operators-8jgdc\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:08 crc kubenswrapper[4922]: I0126 15:39:08.076569 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq769\" (UniqueName: \"kubernetes.io/projected/b98cb77d-6126-4e89-9f1f-a113974f2b5b-kube-api-access-rq769\") pod \"certified-operators-8jgdc\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:08 crc kubenswrapper[4922]: I0126 15:39:08.076689 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-catalog-content\") pod \"certified-operators-8jgdc\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:08 crc kubenswrapper[4922]: I0126 15:39:08.076809 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-utilities\") pod \"certified-operators-8jgdc\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:08 crc kubenswrapper[4922]: I0126 15:39:08.077303 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-catalog-content\") pod \"certified-operators-8jgdc\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " 
pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:08 crc kubenswrapper[4922]: I0126 15:39:08.077358 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-utilities\") pod \"certified-operators-8jgdc\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:08 crc kubenswrapper[4922]: I0126 15:39:08.170983 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq769\" (UniqueName: \"kubernetes.io/projected/b98cb77d-6126-4e89-9f1f-a113974f2b5b-kube-api-access-rq769\") pod \"certified-operators-8jgdc\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:08 crc kubenswrapper[4922]: I0126 15:39:08.225454 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:08 crc kubenswrapper[4922]: I0126 15:39:08.779739 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jgdc"] Jan 26 15:39:09 crc kubenswrapper[4922]: I0126 15:39:09.147892 4922 generic.go:334] "Generic (PLEG): container finished" podID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerID="1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8" exitCode=0 Jan 26 15:39:09 crc kubenswrapper[4922]: I0126 15:39:09.147951 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jgdc" event={"ID":"b98cb77d-6126-4e89-9f1f-a113974f2b5b","Type":"ContainerDied","Data":"1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8"} Jan 26 15:39:09 crc kubenswrapper[4922]: I0126 15:39:09.149708 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jgdc" event={"ID":"b98cb77d-6126-4e89-9f1f-a113974f2b5b","Type":"ContainerStarted","Data":"c88e2033f74166b8de238bf295670d97e36530223ae7994b4d708dca3d8cc628"} Jan 26 15:39:11 crc kubenswrapper[4922]: I0126 15:39:11.172552 4922 generic.go:334] "Generic (PLEG): container finished" podID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerID="bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f" exitCode=0 Jan 26 15:39:11 crc kubenswrapper[4922]: I0126 15:39:11.172952 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jgdc" event={"ID":"b98cb77d-6126-4e89-9f1f-a113974f2b5b","Type":"ContainerDied","Data":"bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f"} Jan 26 15:39:12 crc kubenswrapper[4922]: I0126 15:39:12.188691 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jgdc" event={"ID":"b98cb77d-6126-4e89-9f1f-a113974f2b5b","Type":"ContainerStarted","Data":"ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7"} Jan 26 15:39:12 crc kubenswrapper[4922]: I0126 15:39:12.214047 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jgdc" podStartSLOduration=2.751812841 podStartE2EDuration="5.21402159s" podCreationTimestamp="2026-01-26 15:39:07 +0000 UTC" firstStartedPulling="2026-01-26 15:39:09.149487599 +0000 UTC m=+5366.351750371" lastFinishedPulling="2026-01-26 15:39:11.611696348 +0000 UTC m=+5368.813959120" observedRunningTime="2026-01-26 15:39:12.206573925 +0000 UTC m=+5369.408836697" 
watchObservedRunningTime="2026-01-26 15:39:12.21402159 +0000 UTC m=+5369.416284362" Jan 26 15:39:18 crc kubenswrapper[4922]: I0126 15:39:18.225697 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:18 crc kubenswrapper[4922]: I0126 15:39:18.226329 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:18 crc kubenswrapper[4922]: I0126 15:39:18.284832 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:18 crc kubenswrapper[4922]: I0126 15:39:18.333995 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:18 crc kubenswrapper[4922]: I0126 15:39:18.527897 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jgdc"] Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.265312 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8jgdc" podUID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerName="registry-server" containerID="cri-o://ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7" gracePeriod=2 Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.733394 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.760726 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-utilities\") pod \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.760823 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq769\" (UniqueName: \"kubernetes.io/projected/b98cb77d-6126-4e89-9f1f-a113974f2b5b-kube-api-access-rq769\") pod \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.761006 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-catalog-content\") pod \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\" (UID: \"b98cb77d-6126-4e89-9f1f-a113974f2b5b\") " Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.761539 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-utilities" (OuterVolumeSpecName: "utilities") pod "b98cb77d-6126-4e89-9f1f-a113974f2b5b" (UID: "b98cb77d-6126-4e89-9f1f-a113974f2b5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.769112 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98cb77d-6126-4e89-9f1f-a113974f2b5b-kube-api-access-rq769" (OuterVolumeSpecName: "kube-api-access-rq769") pod "b98cb77d-6126-4e89-9f1f-a113974f2b5b" (UID: "b98cb77d-6126-4e89-9f1f-a113974f2b5b"). InnerVolumeSpecName "kube-api-access-rq769". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.834448 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b98cb77d-6126-4e89-9f1f-a113974f2b5b" (UID: "b98cb77d-6126-4e89-9f1f-a113974f2b5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.863930 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq769\" (UniqueName: \"kubernetes.io/projected/b98cb77d-6126-4e89-9f1f-a113974f2b5b-kube-api-access-rq769\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.863966 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:20 crc kubenswrapper[4922]: I0126 15:39:20.863975 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98cb77d-6126-4e89-9f1f-a113974f2b5b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:21 crc kubenswrapper[4922]: E0126 15:39:21.196461 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb98cb77d_6126_4e89_9f1f_a113974f2b5b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb98cb77d_6126_4e89_9f1f_a113974f2b5b.slice/crio-c88e2033f74166b8de238bf295670d97e36530223ae7994b4d708dca3d8cc628\": RecentStats: unable to find data in memory cache]" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.276494 4922 generic.go:334] "Generic (PLEG): container finished" podID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerID="ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7" exitCode=0 Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.276549 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jgdc" event={"ID":"b98cb77d-6126-4e89-9f1f-a113974f2b5b","Type":"ContainerDied","Data":"ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7"} Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.276572 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jgdc" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.276604 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jgdc" event={"ID":"b98cb77d-6126-4e89-9f1f-a113974f2b5b","Type":"ContainerDied","Data":"c88e2033f74166b8de238bf295670d97e36530223ae7994b4d708dca3d8cc628"} Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.276632 4922 scope.go:117] "RemoveContainer" containerID="ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.307293 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jgdc"] Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.313156 4922 scope.go:117] "RemoveContainer" containerID="bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.317422 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8jgdc"] Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.384890 4922 scope.go:117] "RemoveContainer" containerID="1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.430331 4922 scope.go:117] "RemoveContainer" containerID="ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7" Jan 26 15:39:21 crc kubenswrapper[4922]: E0126 15:39:21.430962 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7\": container with ID starting with ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7 not found: ID does not exist" containerID="ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.431012 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7"} err="failed to get container status \"ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7\": rpc error: code = NotFound desc = could not find container \"ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7\": container with ID starting with ce9c25ae86f5d57bcef7d3d165283c6a1ea601763685f0f705038f9bd50cb4e7 not found: ID does not exist" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.431044 4922 scope.go:117] "RemoveContainer" containerID="bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f" Jan 26 15:39:21 crc kubenswrapper[4922]: E0126 15:39:21.432304 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f\": container with ID starting with bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f not found: ID does not exist" containerID="bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.432338 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f"} err="failed to get container status \"bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f\": rpc error: code = NotFound desc = could not find 
container \"bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f\": container with ID starting with bf0ac5ad302bc8f9524ddfeb212186d59537cd804c5c9b8e45ba0027db2ef62f not found: ID does not exist" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.432356 4922 scope.go:117] "RemoveContainer" containerID="1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8" Jan 26 15:39:21 crc kubenswrapper[4922]: E0126 15:39:21.432714 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8\": container with ID starting with 1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8 not found: ID does not exist" containerID="1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8" Jan 26 15:39:21 crc kubenswrapper[4922]: I0126 15:39:21.432811 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8"} err="failed to get container status \"1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8\": rpc error: code = NotFound desc = could not find container \"1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8\": container with ID starting with 1e53e3579e78bd8607f22ade229b6fec952a16a8dcc04dc36c6c451f775022f8 not found: ID does not exist" Jan 26 15:39:23 crc kubenswrapper[4922]: I0126 15:39:23.119660 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" path="/var/lib/kubelet/pods/b98cb77d-6126-4e89-9f1f-a113974f2b5b/volumes" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.164400 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2kcc6"] Jan 26 15:40:04 crc kubenswrapper[4922]: E0126 15:40:04.165571 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerName="extract-utilities" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.165588 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerName="extract-utilities" Jan 26 15:40:04 crc kubenswrapper[4922]: E0126 15:40:04.165611 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerName="registry-server" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.165619 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerName="registry-server" Jan 26 15:40:04 crc kubenswrapper[4922]: E0126 15:40:04.165649 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerName="extract-content" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.165658 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerName="extract-content" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.165988 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98cb77d-6126-4e89-9f1f-a113974f2b5b" containerName="registry-server" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.168006 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.176451 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-utilities\") pod \"redhat-marketplace-2kcc6\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.176639 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-catalog-content\") pod \"redhat-marketplace-2kcc6\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.176674 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chn9h\" (UniqueName: \"kubernetes.io/projected/3bd8ff73-2f47-44ac-a186-205f74c94680-kube-api-access-chn9h\") pod \"redhat-marketplace-2kcc6\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.181057 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2kcc6"] Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.278373 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-utilities\") pod \"redhat-marketplace-2kcc6\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.278845 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-catalog-content\") pod \"redhat-marketplace-2kcc6\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.278923 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-utilities\") pod \"redhat-marketplace-2kcc6\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.279106 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chn9h\" (UniqueName: \"kubernetes.io/projected/3bd8ff73-2f47-44ac-a186-205f74c94680-kube-api-access-chn9h\") pod \"redhat-marketplace-2kcc6\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.280382 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-catalog-content\") pod \"redhat-marketplace-2kcc6\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.298613 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-chn9h\" (UniqueName: \"kubernetes.io/projected/3bd8ff73-2f47-44ac-a186-205f74c94680-kube-api-access-chn9h\") pod \"redhat-marketplace-2kcc6\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.500805 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:04 crc kubenswrapper[4922]: I0126 15:40:04.990522 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2kcc6"] Jan 26 15:40:05 crc kubenswrapper[4922]: I0126 15:40:05.742935 4922 generic.go:334] "Generic (PLEG): container finished" podID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerID="8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e" exitCode=0 Jan 26 15:40:05 crc kubenswrapper[4922]: I0126 15:40:05.743139 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kcc6" event={"ID":"3bd8ff73-2f47-44ac-a186-205f74c94680","Type":"ContainerDied","Data":"8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e"} Jan 26 15:40:05 crc kubenswrapper[4922]: I0126 15:40:05.743498 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kcc6" event={"ID":"3bd8ff73-2f47-44ac-a186-205f74c94680","Type":"ContainerStarted","Data":"72d2e316ecd904b5ebff033d7e4b5e8a2e7e81a0aa790927cfa39b9e68d0d242"} Jan 26 15:40:07 crc kubenswrapper[4922]: I0126 15:40:07.770471 4922 generic.go:334] "Generic (PLEG): container finished" podID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerID="30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198" exitCode=0 Jan 26 15:40:07 crc kubenswrapper[4922]: I0126 15:40:07.770577 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kcc6" event={"ID":"3bd8ff73-2f47-44ac-a186-205f74c94680","Type":"ContainerDied","Data":"30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198"} Jan 26 15:40:09 crc kubenswrapper[4922]: I0126 15:40:09.795052 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kcc6" event={"ID":"3bd8ff73-2f47-44ac-a186-205f74c94680","Type":"ContainerStarted","Data":"84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280"} Jan 26 15:40:09 crc kubenswrapper[4922]: I0126 15:40:09.824311 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2kcc6" podStartSLOduration=3.233074105 podStartE2EDuration="5.824286618s" podCreationTimestamp="2026-01-26 15:40:04 +0000 UTC" firstStartedPulling="2026-01-26 15:40:05.74474666 +0000 UTC m=+5422.947009442" lastFinishedPulling="2026-01-26 15:40:08.335959183 +0000 UTC m=+5425.538221955" observedRunningTime="2026-01-26 15:40:09.816461823 +0000 UTC m=+5427.018724595" watchObservedRunningTime="2026-01-26 15:40:09.824286618 +0000 UTC m=+5427.026549390" Jan 26 15:40:14 crc kubenswrapper[4922]: I0126 15:40:14.501246 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:14 crc kubenswrapper[4922]: I0126 15:40:14.501875 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:15 crc kubenswrapper[4922]: I0126 15:40:15.140802 4922 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:15 crc kubenswrapper[4922]: I0126 15:40:15.192591 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:15 crc kubenswrapper[4922]: I0126 15:40:15.379144 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2kcc6"] Jan 26 15:40:16 crc kubenswrapper[4922]: I0126 15:40:16.862410 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2kcc6" podUID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerName="registry-server" containerID="cri-o://84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280" gracePeriod=2 Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.355337 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.476105 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-utilities\") pod \"3bd8ff73-2f47-44ac-a186-205f74c94680\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.476511 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chn9h\" (UniqueName: \"kubernetes.io/projected/3bd8ff73-2f47-44ac-a186-205f74c94680-kube-api-access-chn9h\") pod \"3bd8ff73-2f47-44ac-a186-205f74c94680\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.476656 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-catalog-content\") pod \"3bd8ff73-2f47-44ac-a186-205f74c94680\" (UID: \"3bd8ff73-2f47-44ac-a186-205f74c94680\") " Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.477297 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-utilities" (OuterVolumeSpecName: "utilities") pod "3bd8ff73-2f47-44ac-a186-205f74c94680" (UID: "3bd8ff73-2f47-44ac-a186-205f74c94680"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.489615 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd8ff73-2f47-44ac-a186-205f74c94680-kube-api-access-chn9h" (OuterVolumeSpecName: "kube-api-access-chn9h") pod "3bd8ff73-2f47-44ac-a186-205f74c94680" (UID: "3bd8ff73-2f47-44ac-a186-205f74c94680"). InnerVolumeSpecName "kube-api-access-chn9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.499723 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bd8ff73-2f47-44ac-a186-205f74c94680" (UID: "3bd8ff73-2f47-44ac-a186-205f74c94680"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.579808 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.580050 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chn9h\" (UniqueName: \"kubernetes.io/projected/3bd8ff73-2f47-44ac-a186-205f74c94680-kube-api-access-chn9h\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.580254 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd8ff73-2f47-44ac-a186-205f74c94680-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.871779 4922 generic.go:334] "Generic (PLEG): container finished" podID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerID="84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280" exitCode=0 Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.871838 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2kcc6" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.871845 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kcc6" event={"ID":"3bd8ff73-2f47-44ac-a186-205f74c94680","Type":"ContainerDied","Data":"84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280"} Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.872214 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2kcc6" event={"ID":"3bd8ff73-2f47-44ac-a186-205f74c94680","Type":"ContainerDied","Data":"72d2e316ecd904b5ebff033d7e4b5e8a2e7e81a0aa790927cfa39b9e68d0d242"} Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.872232 4922 scope.go:117] "RemoveContainer" containerID="84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.901336 4922 scope.go:117] "RemoveContainer" containerID="30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.906333 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2kcc6"] Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.915994 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2kcc6"] Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.934887 4922 scope.go:117] "RemoveContainer" containerID="8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.993922 4922 scope.go:117] "RemoveContainer" containerID="84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280" Jan 26 15:40:17 crc kubenswrapper[4922]: E0126 15:40:17.994730 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280\": container with ID starting with 84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280 not found: ID does not exist" containerID="84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.994783 4922 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280"} err="failed to get container status \"84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280\": rpc error: code = NotFound desc = could not find container \"84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280\": container with ID starting with 84b53ca43af0c2a5cdc0bb5901648a1736584a652aee43ee8e08b2e0f5caf280 not found: ID does not exist" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.994808 4922 scope.go:117] "RemoveContainer" containerID="30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198" Jan 26 15:40:17 crc kubenswrapper[4922]: E0126 15:40:17.995123 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198\": container with ID starting with 30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198 not found: ID does not exist" containerID="30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.995151 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198"} err="failed to get container status \"30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198\": rpc error: code = NotFound desc = could not find container \"30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198\": container with ID starting with 30a56033a41072d177cff2a58daa2020b920e3ef3a09042de6e1c04e670b8198 not found: ID does not exist" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.995169 4922 scope.go:117] "RemoveContainer" containerID="8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e" Jan 26 15:40:17 crc kubenswrapper[4922]: E0126 15:40:17.995486 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e\": container with ID starting with 8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e not found: ID does not exist" containerID="8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e" Jan 26 15:40:17 crc kubenswrapper[4922]: I0126 15:40:17.995510 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e"} err="failed to get container status \"8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e\": rpc error: code = NotFound desc = could not find container \"8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e\": container with ID starting with 8d3839cc3a408b087fc76d120ebd5e6272b49adcadad54b4ee1aea0aeae1d61e not found: ID does not exist" Jan 26 15:40:19 crc kubenswrapper[4922]: I0126 15:40:19.105277 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd8ff73-2f47-44ac-a186-205f74c94680" path="/var/lib/kubelet/pods/3bd8ff73-2f47-44ac-a186-205f74c94680/volumes" Jan 26 15:41:11 crc kubenswrapper[4922]: I0126 15:41:11.306646 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:41:11 crc kubenswrapper[4922]: I0126 15:41:11.307274 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:41:41 crc kubenswrapper[4922]: I0126 15:41:41.306412 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:41:41 crc kubenswrapper[4922]: I0126 15:41:41.307079 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:42:11 crc kubenswrapper[4922]: I0126 15:42:11.306491 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:42:11 crc kubenswrapper[4922]: I0126 15:42:11.307035 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:42:11 crc kubenswrapper[4922]: I0126 15:42:11.307100 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 15:42:11 crc kubenswrapper[4922]: I0126 15:42:11.307844 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30f2f52da32c072f15496ac9a2d78ed8632070590edf42b4139a212a500d3460"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:42:11 crc kubenswrapper[4922]: I0126 15:42:11.307914 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://30f2f52da32c072f15496ac9a2d78ed8632070590edf42b4139a212a500d3460" gracePeriod=600 Jan 26 15:42:12 crc kubenswrapper[4922]: I0126 15:42:12.000977 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="30f2f52da32c072f15496ac9a2d78ed8632070590edf42b4139a212a500d3460" exitCode=0 Jan 26 15:42:12 crc kubenswrapper[4922]: I0126 15:42:12.001743 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" 
event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"30f2f52da32c072f15496ac9a2d78ed8632070590edf42b4139a212a500d3460"} Jan 26 15:42:12 crc kubenswrapper[4922]: I0126 15:42:12.001781 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"} Jan 26 15:42:12 crc kubenswrapper[4922]: I0126 15:42:12.001803 4922 scope.go:117] "RemoveContainer" containerID="d8579ea055c9fc7a3a82392d644f3d13c3a3df936526e7bc32c7a3f5cb6ce254" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.510777 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xlk8d"] Jan 26 15:42:16 crc kubenswrapper[4922]: E0126 15:42:16.512170 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerName="extract-utilities" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.512197 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerName="extract-utilities" Jan 26 15:42:16 crc kubenswrapper[4922]: E0126 15:42:16.512221 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerName="extract-content" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.512227 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerName="extract-content" Jan 26 15:42:16 crc kubenswrapper[4922]: E0126 15:42:16.512253 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerName="registry-server" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.512260 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerName="registry-server" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.512480 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd8ff73-2f47-44ac-a186-205f74c94680" containerName="registry-server" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.516914 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.528884 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlk8d"] Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.572265 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-utilities\") pod \"community-operators-xlk8d\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.572374 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zln\" (UniqueName: \"kubernetes.io/projected/982cface-5bc8-4eef-b498-02fe1fa491c1-kube-api-access-d5zln\") pod \"community-operators-xlk8d\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.572521 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-catalog-content\") pod \"community-operators-xlk8d\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.674275 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5zln\" (UniqueName: \"kubernetes.io/projected/982cface-5bc8-4eef-b498-02fe1fa491c1-kube-api-access-d5zln\") pod \"community-operators-xlk8d\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.674468 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-catalog-content\") pod \"community-operators-xlk8d\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.674693 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-utilities\") pod \"community-operators-xlk8d\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.674961 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-catalog-content\") pod \"community-operators-xlk8d\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.675307 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-utilities\") pod \"community-operators-xlk8d\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.706397 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d5zln\" (UniqueName: \"kubernetes.io/projected/982cface-5bc8-4eef-b498-02fe1fa491c1-kube-api-access-d5zln\") pod \"community-operators-xlk8d\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:16 crc kubenswrapper[4922]: I0126 15:42:16.836765 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:17 crc kubenswrapper[4922]: I0126 15:42:17.425599 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlk8d"] Jan 26 15:42:18 crc kubenswrapper[4922]: I0126 15:42:18.075347 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlk8d" event={"ID":"982cface-5bc8-4eef-b498-02fe1fa491c1","Type":"ContainerStarted","Data":"d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1"} Jan 26 15:42:18 crc kubenswrapper[4922]: I0126 15:42:18.075667 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlk8d" event={"ID":"982cface-5bc8-4eef-b498-02fe1fa491c1","Type":"ContainerStarted","Data":"99ed29856eefff0f14082c4ba33906debe173e59e09bc9506cc36236e06b6305"} Jan 26 15:42:19 crc kubenswrapper[4922]: I0126 15:42:19.086533 4922 generic.go:334] "Generic (PLEG): container finished" podID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerID="d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1" exitCode=0 Jan 26 15:42:19 crc kubenswrapper[4922]: I0126 15:42:19.086813 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlk8d" event={"ID":"982cface-5bc8-4eef-b498-02fe1fa491c1","Type":"ContainerDied","Data":"d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1"} Jan 26 15:42:21 crc kubenswrapper[4922]: I0126 15:42:21.132588 4922 generic.go:334] "Generic (PLEG): container finished" podID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerID="78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb" exitCode=0 Jan 26 15:42:21 crc kubenswrapper[4922]: I0126 15:42:21.133225 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlk8d" event={"ID":"982cface-5bc8-4eef-b498-02fe1fa491c1","Type":"ContainerDied","Data":"78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb"} Jan 26 15:42:23 crc kubenswrapper[4922]: I0126 15:42:23.157871 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlk8d" event={"ID":"982cface-5bc8-4eef-b498-02fe1fa491c1","Type":"ContainerStarted","Data":"635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6"} Jan 26 15:42:23 crc kubenswrapper[4922]: I0126 15:42:23.181422 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xlk8d" podStartSLOduration=4.319735616 podStartE2EDuration="7.18140359s" podCreationTimestamp="2026-01-26 15:42:16 +0000 UTC" firstStartedPulling="2026-01-26 15:42:19.088876213 +0000 UTC m=+5556.291138985" lastFinishedPulling="2026-01-26 15:42:21.950544167 +0000 UTC m=+5559.152806959" observedRunningTime="2026-01-26 15:42:23.174407047 +0000 UTC m=+5560.376669819" watchObservedRunningTime="2026-01-26 15:42:23.18140359 +0000 UTC m=+5560.383666362" Jan 26 15:42:26 crc kubenswrapper[4922]: I0126 15:42:26.837592 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:26 crc kubenswrapper[4922]: I0126 15:42:26.838222 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:26 crc kubenswrapper[4922]: I0126 15:42:26.881562 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:27 crc kubenswrapper[4922]: I0126 15:42:27.247968 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:27 crc kubenswrapper[4922]: I0126 15:42:27.299018 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlk8d"] Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.217720 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xlk8d" podUID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerName="registry-server" containerID="cri-o://635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6" gracePeriod=2 Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.696316 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.873000 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-utilities\") pod \"982cface-5bc8-4eef-b498-02fe1fa491c1\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.874367 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5zln\" (UniqueName: \"kubernetes.io/projected/982cface-5bc8-4eef-b498-02fe1fa491c1-kube-api-access-d5zln\") pod \"982cface-5bc8-4eef-b498-02fe1fa491c1\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.874548 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-utilities" (OuterVolumeSpecName: "utilities") pod "982cface-5bc8-4eef-b498-02fe1fa491c1" (UID: "982cface-5bc8-4eef-b498-02fe1fa491c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.874702 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-catalog-content\") pod \"982cface-5bc8-4eef-b498-02fe1fa491c1\" (UID: \"982cface-5bc8-4eef-b498-02fe1fa491c1\") " Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.875826 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.891908 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982cface-5bc8-4eef-b498-02fe1fa491c1-kube-api-access-d5zln" (OuterVolumeSpecName: "kube-api-access-d5zln") pod "982cface-5bc8-4eef-b498-02fe1fa491c1" (UID: "982cface-5bc8-4eef-b498-02fe1fa491c1"). InnerVolumeSpecName "kube-api-access-d5zln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.941610 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "982cface-5bc8-4eef-b498-02fe1fa491c1" (UID: "982cface-5bc8-4eef-b498-02fe1fa491c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.976988 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982cface-5bc8-4eef-b498-02fe1fa491c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:42:29 crc kubenswrapper[4922]: I0126 15:42:29.977037 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5zln\" (UniqueName: \"kubernetes.io/projected/982cface-5bc8-4eef-b498-02fe1fa491c1-kube-api-access-d5zln\") on node \"crc\" DevicePath \"\"" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.231559 4922 generic.go:334] "Generic (PLEG): container finished" podID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerID="635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6" exitCode=0 Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.231658 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlk8d" event={"ID":"982cface-5bc8-4eef-b498-02fe1fa491c1","Type":"ContainerDied","Data":"635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6"} Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.231717 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlk8d" event={"ID":"982cface-5bc8-4eef-b498-02fe1fa491c1","Type":"ContainerDied","Data":"99ed29856eefff0f14082c4ba33906debe173e59e09bc9506cc36236e06b6305"} Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.231734 4922 scope.go:117] "RemoveContainer" containerID="635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.231679 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlk8d" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.264284 4922 scope.go:117] "RemoveContainer" containerID="78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.269289 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlk8d"] Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.280602 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xlk8d"] Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.297882 4922 scope.go:117] "RemoveContainer" containerID="d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.347337 4922 scope.go:117] "RemoveContainer" containerID="635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6" Jan 26 15:42:30 crc kubenswrapper[4922]: E0126 15:42:30.347924 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6\": container with ID starting with 635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6 not found: ID does not exist" containerID="635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.347967 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6"} err="failed to get container status \"635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6\": rpc error: code = NotFound desc = could not find container \"635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6\": container with ID starting with 635aba6548b60c1a432815e57cd92b58431dfe4231ebb4df4f3ea790cea9fff6 not found: ID does not exist" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.347994 4922 scope.go:117] "RemoveContainer" containerID="78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb" Jan 26 15:42:30 crc kubenswrapper[4922]: E0126 15:42:30.348463 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb\": container with ID starting with 78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb not found: ID does not exist" containerID="78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.348516 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb"} err="failed to get container status \"78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb\": rpc error: code = NotFound desc = could not find container \"78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb\": container with ID starting with 78a6fc305bf8e845881107e21cf6ea9c167721f70cf8efa2f71e0eacb62fccbb not found: ID does not exist" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.348537 4922 scope.go:117] "RemoveContainer" containerID="d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1" Jan 26 15:42:30 crc kubenswrapper[4922]: E0126 15:42:30.348786 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1\": container with ID starting with d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1 not found: ID does not exist" containerID="d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1" Jan 26 15:42:30 crc kubenswrapper[4922]: I0126 15:42:30.348820 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1"} err="failed to get container status \"d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1\": rpc error: code = NotFound desc = could not find container \"d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1\": container with ID starting with d17f9970a82048c945afdb8c536a26f387f5c89c8883376d2b6f911635ab8cb1 not found: ID does not exist" Jan 26 15:42:31 crc kubenswrapper[4922]: I0126 15:42:31.107111 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982cface-5bc8-4eef-b498-02fe1fa491c1" path="/var/lib/kubelet/pods/982cface-5bc8-4eef-b498-02fe1fa491c1/volumes" Jan 26 15:44:11 crc kubenswrapper[4922]: I0126 15:44:11.306405 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:44:11 crc kubenswrapper[4922]: I0126 15:44:11.306925 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:44:41 crc kubenswrapper[4922]: I0126 15:44:41.306862 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:44:41 crc kubenswrapper[4922]: I0126 15:44:41.307481 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.159442 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl"] Jan 26 15:45:00 crc kubenswrapper[4922]: E0126 15:45:00.162152 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerName="extract-utilities" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.162187 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerName="extract-utilities" Jan 26 15:45:00 crc kubenswrapper[4922]: E0126 15:45:00.162205 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerName="extract-content" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.162216 4922 
state_mem.go:107] "Deleted CPUSet assignment" podUID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerName="extract-content" Jan 26 15:45:00 crc kubenswrapper[4922]: E0126 15:45:00.162235 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerName="registry-server" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.162243 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerName="registry-server" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.162561 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="982cface-5bc8-4eef-b498-02fe1fa491c1" containerName="registry-server" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.188052 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl"] Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.188236 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.190263 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.190586 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.249406 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rrm2\" (UniqueName: \"kubernetes.io/projected/cb3bdb19-4804-491e-97cd-d4c3d168b276-kube-api-access-9rrm2\") pod \"collect-profiles-29490705-zxnfl\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.249557 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb3bdb19-4804-491e-97cd-d4c3d168b276-secret-volume\") pod \"collect-profiles-29490705-zxnfl\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.249687 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb3bdb19-4804-491e-97cd-d4c3d168b276-config-volume\") pod \"collect-profiles-29490705-zxnfl\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.352180 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rrm2\" (UniqueName: \"kubernetes.io/projected/cb3bdb19-4804-491e-97cd-d4c3d168b276-kube-api-access-9rrm2\") pod \"collect-profiles-29490705-zxnfl\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.352273 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb3bdb19-4804-491e-97cd-d4c3d168b276-secret-volume\") 
pod \"collect-profiles-29490705-zxnfl\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.352372 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb3bdb19-4804-491e-97cd-d4c3d168b276-config-volume\") pod \"collect-profiles-29490705-zxnfl\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.353355 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb3bdb19-4804-491e-97cd-d4c3d168b276-config-volume\") pod \"collect-profiles-29490705-zxnfl\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.362138 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb3bdb19-4804-491e-97cd-d4c3d168b276-secret-volume\") pod \"collect-profiles-29490705-zxnfl\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.376441 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rrm2\" (UniqueName: \"kubernetes.io/projected/cb3bdb19-4804-491e-97cd-d4c3d168b276-kube-api-access-9rrm2\") pod \"collect-profiles-29490705-zxnfl\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:00 crc kubenswrapper[4922]: I0126 15:45:00.509567 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:01 crc kubenswrapper[4922]: I0126 15:45:01.048189 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl"] Jan 26 15:45:01 crc kubenswrapper[4922]: I0126 15:45:01.712651 4922 generic.go:334] "Generic (PLEG): container finished" podID="cb3bdb19-4804-491e-97cd-d4c3d168b276" containerID="edac7a0742cf69b2f0ce883f552a91b42bfe63a5df23266410d8e2de02663857" exitCode=0 Jan 26 15:45:01 crc kubenswrapper[4922]: I0126 15:45:01.712993 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" event={"ID":"cb3bdb19-4804-491e-97cd-d4c3d168b276","Type":"ContainerDied","Data":"edac7a0742cf69b2f0ce883f552a91b42bfe63a5df23266410d8e2de02663857"} Jan 26 15:45:01 crc kubenswrapper[4922]: I0126 15:45:01.713059 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" event={"ID":"cb3bdb19-4804-491e-97cd-d4c3d168b276","Type":"ContainerStarted","Data":"5e64997e7132cb25e92107bd4a16bd90500ec1ccf303d9f4b80636151c609b6e"} Jan 26 15:45:02 crc kubenswrapper[4922]: I0126 15:45:02.725154 4922 generic.go:334] "Generic (PLEG): container finished" podID="29bf7bdf-8c0e-4e1c-812d-1220cc968575" containerID="48a1f50f4376e429475766362060f0e965e244f7e3df4999812311e773d1e25b" exitCode=0 Jan 26 15:45:02 crc kubenswrapper[4922]: I0126 15:45:02.725216 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"29bf7bdf-8c0e-4e1c-812d-1220cc968575","Type":"ContainerDied","Data":"48a1f50f4376e429475766362060f0e965e244f7e3df4999812311e773d1e25b"} Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.130521 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.327160 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb3bdb19-4804-491e-97cd-d4c3d168b276-secret-volume\") pod \"cb3bdb19-4804-491e-97cd-d4c3d168b276\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.327276 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rrm2\" (UniqueName: \"kubernetes.io/projected/cb3bdb19-4804-491e-97cd-d4c3d168b276-kube-api-access-9rrm2\") pod \"cb3bdb19-4804-491e-97cd-d4c3d168b276\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.327360 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb3bdb19-4804-491e-97cd-d4c3d168b276-config-volume\") pod \"cb3bdb19-4804-491e-97cd-d4c3d168b276\" (UID: \"cb3bdb19-4804-491e-97cd-d4c3d168b276\") " Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.329096 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3bdb19-4804-491e-97cd-d4c3d168b276-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb3bdb19-4804-491e-97cd-d4c3d168b276" (UID: "cb3bdb19-4804-491e-97cd-d4c3d168b276"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.337447 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb3bdb19-4804-491e-97cd-d4c3d168b276-kube-api-access-9rrm2" (OuterVolumeSpecName: "kube-api-access-9rrm2") pod "cb3bdb19-4804-491e-97cd-d4c3d168b276" (UID: "cb3bdb19-4804-491e-97cd-d4c3d168b276"). InnerVolumeSpecName "kube-api-access-9rrm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.337942 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb3bdb19-4804-491e-97cd-d4c3d168b276-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb3bdb19-4804-491e-97cd-d4c3d168b276" (UID: "cb3bdb19-4804-491e-97cd-d4c3d168b276"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.429809 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb3bdb19-4804-491e-97cd-d4c3d168b276-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.429852 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rrm2\" (UniqueName: \"kubernetes.io/projected/cb3bdb19-4804-491e-97cd-d4c3d168b276-kube-api-access-9rrm2\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.429863 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb3bdb19-4804-491e-97cd-d4c3d168b276-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.735562 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" event={"ID":"cb3bdb19-4804-491e-97cd-d4c3d168b276","Type":"ContainerDied","Data":"5e64997e7132cb25e92107bd4a16bd90500ec1ccf303d9f4b80636151c609b6e"} Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.735869 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e64997e7132cb25e92107bd4a16bd90500ec1ccf303d9f4b80636151c609b6e" Jan 26 15:45:03 crc kubenswrapper[4922]: I0126 15:45:03.735595 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-zxnfl" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.054431 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.209042 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"] Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.218851 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-4qpnf"] Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.243933 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config\") pod \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.243986 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config-secret\") pod \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.244076 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-temporary\") pod \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.244183 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gdvs\" (UniqueName: \"kubernetes.io/projected/29bf7bdf-8c0e-4e1c-812d-1220cc968575-kube-api-access-5gdvs\") pod \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.244257 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-workdir\") pod \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.244309 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.244332 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ca-certs\") pod \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.244352 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-config-data\") pod \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.244909 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "29bf7bdf-8c0e-4e1c-812d-1220cc968575" (UID: "29bf7bdf-8c0e-4e1c-812d-1220cc968575"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.245116 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ssh-key\") pod \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\" (UID: \"29bf7bdf-8c0e-4e1c-812d-1220cc968575\") " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.245205 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-config-data" (OuterVolumeSpecName: "config-data") pod "29bf7bdf-8c0e-4e1c-812d-1220cc968575" (UID: "29bf7bdf-8c0e-4e1c-812d-1220cc968575"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.246177 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.246203 4922 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.249368 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bf7bdf-8c0e-4e1c-812d-1220cc968575-kube-api-access-5gdvs" (OuterVolumeSpecName: "kube-api-access-5gdvs") pod "29bf7bdf-8c0e-4e1c-812d-1220cc968575" (UID: "29bf7bdf-8c0e-4e1c-812d-1220cc968575"). InnerVolumeSpecName "kube-api-access-5gdvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.252805 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "29bf7bdf-8c0e-4e1c-812d-1220cc968575" (UID: "29bf7bdf-8c0e-4e1c-812d-1220cc968575"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.269234 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "29bf7bdf-8c0e-4e1c-812d-1220cc968575" (UID: "29bf7bdf-8c0e-4e1c-812d-1220cc968575"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.276939 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "29bf7bdf-8c0e-4e1c-812d-1220cc968575" (UID: "29bf7bdf-8c0e-4e1c-812d-1220cc968575"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.277490 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "29bf7bdf-8c0e-4e1c-812d-1220cc968575" (UID: "29bf7bdf-8c0e-4e1c-812d-1220cc968575"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.298720 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "29bf7bdf-8c0e-4e1c-812d-1220cc968575" (UID: "29bf7bdf-8c0e-4e1c-812d-1220cc968575"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.309631 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "29bf7bdf-8c0e-4e1c-812d-1220cc968575" (UID: "29bf7bdf-8c0e-4e1c-812d-1220cc968575"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.347781 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.347826 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.347838 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gdvs\" (UniqueName: \"kubernetes.io/projected/29bf7bdf-8c0e-4e1c-812d-1220cc968575-kube-api-access-5gdvs\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.347849 4922 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/29bf7bdf-8c0e-4e1c-812d-1220cc968575-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.347887 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.347901 4922 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.347913 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29bf7bdf-8c0e-4e1c-812d-1220cc968575-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.370389 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 
15:45:04.450158 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.749623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"29bf7bdf-8c0e-4e1c-812d-1220cc968575","Type":"ContainerDied","Data":"ddf9a39aa7074596d003517e1c797a726b555b3d043a435a2450c47755fa4074"} Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.750874 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddf9a39aa7074596d003517e1c797a726b555b3d043a435a2450c47755fa4074" Jan 26 15:45:04 crc kubenswrapper[4922]: I0126 15:45:04.750982 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 15:45:05 crc kubenswrapper[4922]: I0126 15:45:05.108467 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a531389-b894-4e97-b997-c115d5e393e8" path="/var/lib/kubelet/pods/0a531389-b894-4e97-b997-c115d5e393e8/volumes" Jan 26 15:45:09 crc kubenswrapper[4922]: I0126 15:45:09.925994 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 15:45:09 crc kubenswrapper[4922]: E0126 15:45:09.927094 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bf7bdf-8c0e-4e1c-812d-1220cc968575" containerName="tempest-tests-tempest-tests-runner" Jan 26 15:45:09 crc kubenswrapper[4922]: I0126 15:45:09.927112 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bf7bdf-8c0e-4e1c-812d-1220cc968575" containerName="tempest-tests-tempest-tests-runner" Jan 26 15:45:09 crc kubenswrapper[4922]: E0126 15:45:09.927125 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3bdb19-4804-491e-97cd-d4c3d168b276" containerName="collect-profiles" Jan 26 15:45:09 crc kubenswrapper[4922]: I0126 15:45:09.927132 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3bdb19-4804-491e-97cd-d4c3d168b276" containerName="collect-profiles" Jan 26 15:45:09 crc kubenswrapper[4922]: I0126 15:45:09.927402 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bf7bdf-8c0e-4e1c-812d-1220cc968575" containerName="tempest-tests-tempest-tests-runner" Jan 26 15:45:09 crc kubenswrapper[4922]: I0126 15:45:09.927426 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb3bdb19-4804-491e-97cd-d4c3d168b276" containerName="collect-profiles" Jan 26 15:45:09 crc kubenswrapper[4922]: I0126 15:45:09.928391 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 15:45:09 crc kubenswrapper[4922]: I0126 15:45:09.930687 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g7sds" Jan 26 15:45:09 crc kubenswrapper[4922]: I0126 15:45:09.937575 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.064709 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b8a7c013-fdc7-4f64-b17c-b48b89eda7f6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.064836 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27xsw\" (UniqueName: \"kubernetes.io/projected/b8a7c013-fdc7-4f64-b17c-b48b89eda7f6-kube-api-access-27xsw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b8a7c013-fdc7-4f64-b17c-b48b89eda7f6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.166336 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27xsw\" (UniqueName: \"kubernetes.io/projected/b8a7c013-fdc7-4f64-b17c-b48b89eda7f6-kube-api-access-27xsw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b8a7c013-fdc7-4f64-b17c-b48b89eda7f6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.166877 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b8a7c013-fdc7-4f64-b17c-b48b89eda7f6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.167305 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b8a7c013-fdc7-4f64-b17c-b48b89eda7f6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.202638 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27xsw\" (UniqueName: \"kubernetes.io/projected/b8a7c013-fdc7-4f64-b17c-b48b89eda7f6-kube-api-access-27xsw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b8a7c013-fdc7-4f64-b17c-b48b89eda7f6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.203921 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b8a7c013-fdc7-4f64-b17c-b48b89eda7f6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 15:45:10 crc 
kubenswrapper[4922]: I0126 15:45:10.253353 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.739031 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.742234 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:45:10 crc kubenswrapper[4922]: I0126 15:45:10.831629 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b8a7c013-fdc7-4f64-b17c-b48b89eda7f6","Type":"ContainerStarted","Data":"b8ec616c5ee3bc0da449b7ee01b018e7741df2bcb1d71a2ca85549fe6f711a9c"} Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.306590 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.307791 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.307954 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.308926 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.309103 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" gracePeriod=600 Jan 26 15:45:11 crc kubenswrapper[4922]: E0126 15:45:11.521879 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.841433 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b8a7c013-fdc7-4f64-b17c-b48b89eda7f6","Type":"ContainerStarted","Data":"a96bd3cc4b6d486346203aa7ae63d998f27101e2e9d1fda0041da63a306d3b47"} Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.844931 
4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" exitCode=0 Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.844977 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"} Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.845010 4922 scope.go:117] "RemoveContainer" containerID="30f2f52da32c072f15496ac9a2d78ed8632070590edf42b4139a212a500d3460" Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.845469 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:45:11 crc kubenswrapper[4922]: E0126 15:45:11.845748 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:45:11 crc kubenswrapper[4922]: I0126 15:45:11.881817 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.055091579 podStartE2EDuration="2.881789989s" podCreationTimestamp="2026-01-26 15:45:09 +0000 UTC" firstStartedPulling="2026-01-26 15:45:10.742023185 +0000 UTC m=+5727.944285957" lastFinishedPulling="2026-01-26 15:45:11.568721595 +0000 UTC m=+5728.770984367" observedRunningTime="2026-01-26 15:45:11.857207512 +0000 UTC m=+5729.059470284" watchObservedRunningTime="2026-01-26 15:45:11.881789989 +0000 UTC m=+5729.084052781" Jan 26 15:45:21 crc kubenswrapper[4922]: I0126 15:45:21.519552 4922 scope.go:117] "RemoveContainer" containerID="78d5b60a8be503da0b3149b32a0380cdc43dbea58edea13d6532b9607398719b" Jan 26 15:45:27 crc kubenswrapper[4922]: I0126 15:45:27.092829 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:45:27 crc kubenswrapper[4922]: E0126 15:45:27.093715 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:45:38 crc kubenswrapper[4922]: I0126 15:45:38.092851 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:45:38 crc kubenswrapper[4922]: E0126 15:45:38.093653 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" 
Jan 26 15:45:41 crc kubenswrapper[4922]: I0126 15:45:41.767356 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zkzlp/must-gather-j446m"] Jan 26 15:45:41 crc kubenswrapper[4922]: I0126 15:45:41.770029 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:45:41 crc kubenswrapper[4922]: I0126 15:45:41.773143 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zkzlp"/"kube-root-ca.crt" Jan 26 15:45:41 crc kubenswrapper[4922]: I0126 15:45:41.774785 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zkzlp"/"openshift-service-ca.crt" Jan 26 15:45:41 crc kubenswrapper[4922]: I0126 15:45:41.774876 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zkzlp"/"default-dockercfg-nnlbl" Jan 26 15:45:41 crc kubenswrapper[4922]: I0126 15:45:41.777167 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zkzlp/must-gather-j446m"] Jan 26 15:45:41 crc kubenswrapper[4922]: I0126 15:45:41.916975 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fchg\" (UniqueName: \"kubernetes.io/projected/664779e3-dd2a-4087-9a93-c964c0c2d869-kube-api-access-8fchg\") pod \"must-gather-j446m\" (UID: \"664779e3-dd2a-4087-9a93-c964c0c2d869\") " pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:45:41 crc kubenswrapper[4922]: I0126 15:45:41.917413 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/664779e3-dd2a-4087-9a93-c964c0c2d869-must-gather-output\") pod \"must-gather-j446m\" (UID: \"664779e3-dd2a-4087-9a93-c964c0c2d869\") " pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:45:42 crc kubenswrapper[4922]: I0126 15:45:42.019218 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/664779e3-dd2a-4087-9a93-c964c0c2d869-must-gather-output\") pod \"must-gather-j446m\" (UID: \"664779e3-dd2a-4087-9a93-c964c0c2d869\") " pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:45:42 crc kubenswrapper[4922]: I0126 15:45:42.019325 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fchg\" (UniqueName: \"kubernetes.io/projected/664779e3-dd2a-4087-9a93-c964c0c2d869-kube-api-access-8fchg\") pod \"must-gather-j446m\" (UID: \"664779e3-dd2a-4087-9a93-c964c0c2d869\") " pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:45:42 crc kubenswrapper[4922]: I0126 15:45:42.019645 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/664779e3-dd2a-4087-9a93-c964c0c2d869-must-gather-output\") pod \"must-gather-j446m\" (UID: \"664779e3-dd2a-4087-9a93-c964c0c2d869\") " pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:45:42 crc kubenswrapper[4922]: I0126 15:45:42.038374 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fchg\" (UniqueName: \"kubernetes.io/projected/664779e3-dd2a-4087-9a93-c964c0c2d869-kube-api-access-8fchg\") pod \"must-gather-j446m\" (UID: \"664779e3-dd2a-4087-9a93-c964c0c2d869\") " pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:45:42 crc kubenswrapper[4922]: I0126 15:45:42.090102 4922 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:45:42 crc kubenswrapper[4922]: I0126 15:45:42.599411 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zkzlp/must-gather-j446m"] Jan 26 15:45:43 crc kubenswrapper[4922]: I0126 15:45:43.229282 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/must-gather-j446m" event={"ID":"664779e3-dd2a-4087-9a93-c964c0c2d869","Type":"ContainerStarted","Data":"42d95521f8c31d53f3d536aed32d431997bbfda371dbbaa6141c771b1c442748"} Jan 26 15:45:51 crc kubenswrapper[4922]: I0126 15:45:51.322016 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/must-gather-j446m" event={"ID":"664779e3-dd2a-4087-9a93-c964c0c2d869","Type":"ContainerStarted","Data":"a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a"} Jan 26 15:45:51 crc kubenswrapper[4922]: I0126 15:45:51.322464 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/must-gather-j446m" event={"ID":"664779e3-dd2a-4087-9a93-c964c0c2d869","Type":"ContainerStarted","Data":"dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1"} Jan 26 15:45:51 crc kubenswrapper[4922]: I0126 15:45:51.350201 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zkzlp/must-gather-j446m" podStartSLOduration=2.688747293 podStartE2EDuration="10.350176364s" podCreationTimestamp="2026-01-26 15:45:41 +0000 UTC" firstStartedPulling="2026-01-26 15:45:42.604294335 +0000 UTC m=+5759.806557107" lastFinishedPulling="2026-01-26 15:45:50.265723406 +0000 UTC m=+5767.467986178" observedRunningTime="2026-01-26 15:45:51.340150261 +0000 UTC m=+5768.542413063" watchObservedRunningTime="2026-01-26 15:45:51.350176364 +0000 UTC m=+5768.552439146" Jan 26 15:45:53 crc kubenswrapper[4922]: I0126 15:45:53.101364 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:45:53 crc kubenswrapper[4922]: E0126 15:45:53.101894 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:45:54 crc kubenswrapper[4922]: I0126 15:45:54.977918 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zkzlp/crc-debug-76c2t"] Jan 26 15:45:54 crc kubenswrapper[4922]: I0126 15:45:54.979590 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:45:55 crc kubenswrapper[4922]: I0126 15:45:55.123824 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwh95\" (UniqueName: \"kubernetes.io/projected/99488fd3-f198-4099-b6b9-7894dc47a452-kube-api-access-rwh95\") pod \"crc-debug-76c2t\" (UID: \"99488fd3-f198-4099-b6b9-7894dc47a452\") " pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:45:55 crc kubenswrapper[4922]: I0126 15:45:55.124589 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99488fd3-f198-4099-b6b9-7894dc47a452-host\") pod \"crc-debug-76c2t\" (UID: \"99488fd3-f198-4099-b6b9-7894dc47a452\") " pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:45:55 crc kubenswrapper[4922]: I0126 15:45:55.226850 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwh95\" (UniqueName: \"kubernetes.io/projected/99488fd3-f198-4099-b6b9-7894dc47a452-kube-api-access-rwh95\") pod \"crc-debug-76c2t\" (UID: \"99488fd3-f198-4099-b6b9-7894dc47a452\") " pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:45:55 crc kubenswrapper[4922]: I0126 15:45:55.227115 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99488fd3-f198-4099-b6b9-7894dc47a452-host\") pod \"crc-debug-76c2t\" (UID: \"99488fd3-f198-4099-b6b9-7894dc47a452\") " pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:45:55 crc kubenswrapper[4922]: I0126 15:45:55.227214 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99488fd3-f198-4099-b6b9-7894dc47a452-host\") pod \"crc-debug-76c2t\" (UID: \"99488fd3-f198-4099-b6b9-7894dc47a452\") " pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:45:55 crc kubenswrapper[4922]: I0126 15:45:55.258272 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwh95\" (UniqueName: \"kubernetes.io/projected/99488fd3-f198-4099-b6b9-7894dc47a452-kube-api-access-rwh95\") pod \"crc-debug-76c2t\" (UID: \"99488fd3-f198-4099-b6b9-7894dc47a452\") " pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:45:55 crc kubenswrapper[4922]: I0126 15:45:55.296902 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:45:55 crc kubenswrapper[4922]: I0126 15:45:55.369667 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/crc-debug-76c2t" event={"ID":"99488fd3-f198-4099-b6b9-7894dc47a452","Type":"ContainerStarted","Data":"a0c8c0f876e0e8f14e5411350ce19bfd6cb85c7c14aa3d4e03946ccaa259e48d"} Jan 26 15:46:06 crc kubenswrapper[4922]: I0126 15:46:06.530707 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/crc-debug-76c2t" event={"ID":"99488fd3-f198-4099-b6b9-7894dc47a452","Type":"ContainerStarted","Data":"b1031a6cb1f4c1289497f235d1a4f8ebd3e8b6f929c00b7da69970c5cb8ed7eb"} Jan 26 15:46:08 crc kubenswrapper[4922]: I0126 15:46:08.092715 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:46:08 crc kubenswrapper[4922]: E0126 15:46:08.093475 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:46:21 crc kubenswrapper[4922]: I0126 15:46:21.092674 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:46:21 crc kubenswrapper[4922]: E0126 15:46:21.093626 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:46:32 crc kubenswrapper[4922]: I0126 15:46:32.092978 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:46:32 crc kubenswrapper[4922]: E0126 15:46:32.093924 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:46:45 crc kubenswrapper[4922]: I0126 15:46:45.092694 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:46:45 crc kubenswrapper[4922]: E0126 15:46:45.093626 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:46:57 crc kubenswrapper[4922]: I0126 15:46:57.003757 4922 generic.go:334] "Generic (PLEG): container finished" 
podID="99488fd3-f198-4099-b6b9-7894dc47a452" containerID="b1031a6cb1f4c1289497f235d1a4f8ebd3e8b6f929c00b7da69970c5cb8ed7eb" exitCode=0 Jan 26 15:46:57 crc kubenswrapper[4922]: I0126 15:46:57.003845 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/crc-debug-76c2t" event={"ID":"99488fd3-f198-4099-b6b9-7894dc47a452","Type":"ContainerDied","Data":"b1031a6cb1f4c1289497f235d1a4f8ebd3e8b6f929c00b7da69970c5cb8ed7eb"} Jan 26 15:46:58 crc kubenswrapper[4922]: I0126 15:46:58.175479 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:46:58 crc kubenswrapper[4922]: I0126 15:46:58.232907 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zkzlp/crc-debug-76c2t"] Jan 26 15:46:58 crc kubenswrapper[4922]: I0126 15:46:58.248704 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zkzlp/crc-debug-76c2t"] Jan 26 15:46:58 crc kubenswrapper[4922]: I0126 15:46:58.271424 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwh95\" (UniqueName: \"kubernetes.io/projected/99488fd3-f198-4099-b6b9-7894dc47a452-kube-api-access-rwh95\") pod \"99488fd3-f198-4099-b6b9-7894dc47a452\" (UID: \"99488fd3-f198-4099-b6b9-7894dc47a452\") " Jan 26 15:46:58 crc kubenswrapper[4922]: I0126 15:46:58.271491 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99488fd3-f198-4099-b6b9-7894dc47a452-host\") pod \"99488fd3-f198-4099-b6b9-7894dc47a452\" (UID: \"99488fd3-f198-4099-b6b9-7894dc47a452\") " Jan 26 15:46:58 crc kubenswrapper[4922]: I0126 15:46:58.271673 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99488fd3-f198-4099-b6b9-7894dc47a452-host" (OuterVolumeSpecName: "host") pod "99488fd3-f198-4099-b6b9-7894dc47a452" (UID: "99488fd3-f198-4099-b6b9-7894dc47a452"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:46:58 crc kubenswrapper[4922]: I0126 15:46:58.272107 4922 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99488fd3-f198-4099-b6b9-7894dc47a452-host\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:58 crc kubenswrapper[4922]: I0126 15:46:58.279138 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99488fd3-f198-4099-b6b9-7894dc47a452-kube-api-access-rwh95" (OuterVolumeSpecName: "kube-api-access-rwh95") pod "99488fd3-f198-4099-b6b9-7894dc47a452" (UID: "99488fd3-f198-4099-b6b9-7894dc47a452"). InnerVolumeSpecName "kube-api-access-rwh95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:46:58 crc kubenswrapper[4922]: I0126 15:46:58.373996 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwh95\" (UniqueName: \"kubernetes.io/projected/99488fd3-f198-4099-b6b9-7894dc47a452-kube-api-access-rwh95\") on node \"crc\" DevicePath \"\"" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.022373 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0c8c0f876e0e8f14e5411350ce19bfd6cb85c7c14aa3d4e03946ccaa259e48d" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.022418 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-76c2t" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.093004 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:46:59 crc kubenswrapper[4922]: E0126 15:46:59.093459 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.107811 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99488fd3-f198-4099-b6b9-7894dc47a452" path="/var/lib/kubelet/pods/99488fd3-f198-4099-b6b9-7894dc47a452/volumes" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.430156 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zkzlp/crc-debug-4nbwr"] Jan 26 15:46:59 crc kubenswrapper[4922]: E0126 15:46:59.430586 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99488fd3-f198-4099-b6b9-7894dc47a452" containerName="container-00" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.430599 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="99488fd3-f198-4099-b6b9-7894dc47a452" containerName="container-00" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.430801 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="99488fd3-f198-4099-b6b9-7894dc47a452" containerName="container-00" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.433310 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.500161 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg464\" (UniqueName: \"kubernetes.io/projected/6f676af2-5696-41ef-a331-ae68d21b7134-kube-api-access-mg464\") pod \"crc-debug-4nbwr\" (UID: \"6f676af2-5696-41ef-a331-ae68d21b7134\") " pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.500380 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f676af2-5696-41ef-a331-ae68d21b7134-host\") pod \"crc-debug-4nbwr\" (UID: \"6f676af2-5696-41ef-a331-ae68d21b7134\") " pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.602783 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg464\" (UniqueName: \"kubernetes.io/projected/6f676af2-5696-41ef-a331-ae68d21b7134-kube-api-access-mg464\") pod \"crc-debug-4nbwr\" (UID: \"6f676af2-5696-41ef-a331-ae68d21b7134\") " pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.602974 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f676af2-5696-41ef-a331-ae68d21b7134-host\") pod \"crc-debug-4nbwr\" (UID: \"6f676af2-5696-41ef-a331-ae68d21b7134\") " pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.603145 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f676af2-5696-41ef-a331-ae68d21b7134-host\") pod \"crc-debug-4nbwr\" (UID: \"6f676af2-5696-41ef-a331-ae68d21b7134\") " pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.620480 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg464\" (UniqueName: \"kubernetes.io/projected/6f676af2-5696-41ef-a331-ae68d21b7134-kube-api-access-mg464\") pod \"crc-debug-4nbwr\" (UID: \"6f676af2-5696-41ef-a331-ae68d21b7134\") " pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:46:59 crc kubenswrapper[4922]: I0126 15:46:59.751812 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:47:00 crc kubenswrapper[4922]: I0126 15:47:00.033590 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" event={"ID":"6f676af2-5696-41ef-a331-ae68d21b7134","Type":"ContainerStarted","Data":"461f7d0c60fc5193c9b22ec5d2871a1d0b2c2806f250f17aa64dc91445e166c1"} Jan 26 15:47:01 crc kubenswrapper[4922]: I0126 15:47:01.072050 4922 generic.go:334] "Generic (PLEG): container finished" podID="6f676af2-5696-41ef-a331-ae68d21b7134" containerID="8893047a1c4f63814c20a12363787686beca44f394e7aa3e9d9faffa3498a17c" exitCode=0 Jan 26 15:47:01 crc kubenswrapper[4922]: I0126 15:47:01.072128 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" event={"ID":"6f676af2-5696-41ef-a331-ae68d21b7134","Type":"ContainerDied","Data":"8893047a1c4f63814c20a12363787686beca44f394e7aa3e9d9faffa3498a17c"} Jan 26 15:47:02 crc kubenswrapper[4922]: I0126 15:47:02.282691 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:47:02 crc kubenswrapper[4922]: I0126 15:47:02.455442 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg464\" (UniqueName: \"kubernetes.io/projected/6f676af2-5696-41ef-a331-ae68d21b7134-kube-api-access-mg464\") pod \"6f676af2-5696-41ef-a331-ae68d21b7134\" (UID: \"6f676af2-5696-41ef-a331-ae68d21b7134\") " Jan 26 15:47:02 crc kubenswrapper[4922]: I0126 15:47:02.456131 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f676af2-5696-41ef-a331-ae68d21b7134-host\") pod \"6f676af2-5696-41ef-a331-ae68d21b7134\" (UID: \"6f676af2-5696-41ef-a331-ae68d21b7134\") " Jan 26 15:47:02 crc kubenswrapper[4922]: I0126 15:47:02.456494 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f676af2-5696-41ef-a331-ae68d21b7134-host" (OuterVolumeSpecName: "host") pod "6f676af2-5696-41ef-a331-ae68d21b7134" (UID: "6f676af2-5696-41ef-a331-ae68d21b7134"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:02 crc kubenswrapper[4922]: I0126 15:47:02.457290 4922 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f676af2-5696-41ef-a331-ae68d21b7134-host\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:02 crc kubenswrapper[4922]: I0126 15:47:02.482298 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f676af2-5696-41ef-a331-ae68d21b7134-kube-api-access-mg464" (OuterVolumeSpecName: "kube-api-access-mg464") pod "6f676af2-5696-41ef-a331-ae68d21b7134" (UID: "6f676af2-5696-41ef-a331-ae68d21b7134"). InnerVolumeSpecName "kube-api-access-mg464". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:47:02 crc kubenswrapper[4922]: I0126 15:47:02.558941 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg464\" (UniqueName: \"kubernetes.io/projected/6f676af2-5696-41ef-a331-ae68d21b7134-kube-api-access-mg464\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:03 crc kubenswrapper[4922]: I0126 15:47:03.104712 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" Jan 26 15:47:03 crc kubenswrapper[4922]: I0126 15:47:03.106358 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/crc-debug-4nbwr" event={"ID":"6f676af2-5696-41ef-a331-ae68d21b7134","Type":"ContainerDied","Data":"461f7d0c60fc5193c9b22ec5d2871a1d0b2c2806f250f17aa64dc91445e166c1"} Jan 26 15:47:03 crc kubenswrapper[4922]: I0126 15:47:03.106399 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="461f7d0c60fc5193c9b22ec5d2871a1d0b2c2806f250f17aa64dc91445e166c1" Jan 26 15:47:03 crc kubenswrapper[4922]: E0126 15:47:03.346541 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f676af2_5696_41ef_a331_ae68d21b7134.slice/crio-461f7d0c60fc5193c9b22ec5d2871a1d0b2c2806f250f17aa64dc91445e166c1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f676af2_5696_41ef_a331_ae68d21b7134.slice\": RecentStats: unable to find data in memory cache]" Jan 26 15:47:03 crc kubenswrapper[4922]: I0126 15:47:03.542548 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zkzlp/crc-debug-4nbwr"] Jan 26 15:47:03 crc kubenswrapper[4922]: I0126 15:47:03.556501 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zkzlp/crc-debug-4nbwr"] Jan 26 15:47:04 crc kubenswrapper[4922]: I0126 15:47:04.713772 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zkzlp/crc-debug-gc9dh"] Jan 26 15:47:04 crc kubenswrapper[4922]: E0126 15:47:04.714190 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f676af2-5696-41ef-a331-ae68d21b7134" containerName="container-00" Jan 26 15:47:04 crc kubenswrapper[4922]: I0126 15:47:04.714230 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f676af2-5696-41ef-a331-ae68d21b7134" containerName="container-00" Jan 26 15:47:04 crc kubenswrapper[4922]: I0126 15:47:04.714510 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f676af2-5696-41ef-a331-ae68d21b7134" containerName="container-00" Jan 26 15:47:04 crc kubenswrapper[4922]: I0126 15:47:04.715258 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:04 crc kubenswrapper[4922]: I0126 15:47:04.905155 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj44j\" (UniqueName: \"kubernetes.io/projected/81435ade-e853-4dfc-8616-d9a915a67365-kube-api-access-tj44j\") pod \"crc-debug-gc9dh\" (UID: \"81435ade-e853-4dfc-8616-d9a915a67365\") " pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:04 crc kubenswrapper[4922]: I0126 15:47:04.905341 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81435ade-e853-4dfc-8616-d9a915a67365-host\") pod \"crc-debug-gc9dh\" (UID: \"81435ade-e853-4dfc-8616-d9a915a67365\") " pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:05 crc kubenswrapper[4922]: I0126 15:47:05.007269 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81435ade-e853-4dfc-8616-d9a915a67365-host\") pod \"crc-debug-gc9dh\" (UID: \"81435ade-e853-4dfc-8616-d9a915a67365\") " pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:05 crc kubenswrapper[4922]: I0126 15:47:05.007446 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj44j\" (UniqueName: \"kubernetes.io/projected/81435ade-e853-4dfc-8616-d9a915a67365-kube-api-access-tj44j\") pod \"crc-debug-gc9dh\" (UID: \"81435ade-e853-4dfc-8616-d9a915a67365\") " pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:05 crc kubenswrapper[4922]: I0126 15:47:05.007891 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81435ade-e853-4dfc-8616-d9a915a67365-host\") pod \"crc-debug-gc9dh\" (UID: \"81435ade-e853-4dfc-8616-d9a915a67365\") " pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:05 crc kubenswrapper[4922]: I0126 15:47:05.029722 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj44j\" (UniqueName: \"kubernetes.io/projected/81435ade-e853-4dfc-8616-d9a915a67365-kube-api-access-tj44j\") pod \"crc-debug-gc9dh\" (UID: \"81435ade-e853-4dfc-8616-d9a915a67365\") " pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:05 crc kubenswrapper[4922]: I0126 15:47:05.033166 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:05 crc kubenswrapper[4922]: I0126 15:47:05.107116 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f676af2-5696-41ef-a331-ae68d21b7134" path="/var/lib/kubelet/pods/6f676af2-5696-41ef-a331-ae68d21b7134/volumes" Jan 26 15:47:05 crc kubenswrapper[4922]: I0126 15:47:05.115380 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" event={"ID":"81435ade-e853-4dfc-8616-d9a915a67365","Type":"ContainerStarted","Data":"0a7790fb95da56733891961d1fd05d3bd6ac6931902f4eee7a316757d9d19eff"} Jan 26 15:47:06 crc kubenswrapper[4922]: I0126 15:47:06.125597 4922 generic.go:334] "Generic (PLEG): container finished" podID="81435ade-e853-4dfc-8616-d9a915a67365" containerID="bc72dc16dc52e4f2a84cab4951ec7c29ba6a62ade9233f3be99d73c96d42d6d6" exitCode=0 Jan 26 15:47:06 crc kubenswrapper[4922]: I0126 15:47:06.125688 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" event={"ID":"81435ade-e853-4dfc-8616-d9a915a67365","Type":"ContainerDied","Data":"bc72dc16dc52e4f2a84cab4951ec7c29ba6a62ade9233f3be99d73c96d42d6d6"} Jan 26 15:47:06 crc kubenswrapper[4922]: I0126 15:47:06.166556 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zkzlp/crc-debug-gc9dh"] Jan 26 15:47:06 crc kubenswrapper[4922]: I0126 15:47:06.175181 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zkzlp/crc-debug-gc9dh"] Jan 26 15:47:07 crc kubenswrapper[4922]: I0126 15:47:07.260287 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:07 crc kubenswrapper[4922]: I0126 15:47:07.349670 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj44j\" (UniqueName: \"kubernetes.io/projected/81435ade-e853-4dfc-8616-d9a915a67365-kube-api-access-tj44j\") pod \"81435ade-e853-4dfc-8616-d9a915a67365\" (UID: \"81435ade-e853-4dfc-8616-d9a915a67365\") " Jan 26 15:47:07 crc kubenswrapper[4922]: I0126 15:47:07.349885 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81435ade-e853-4dfc-8616-d9a915a67365-host\") pod \"81435ade-e853-4dfc-8616-d9a915a67365\" (UID: \"81435ade-e853-4dfc-8616-d9a915a67365\") " Jan 26 15:47:07 crc kubenswrapper[4922]: I0126 15:47:07.350545 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81435ade-e853-4dfc-8616-d9a915a67365-host" (OuterVolumeSpecName: "host") pod "81435ade-e853-4dfc-8616-d9a915a67365" (UID: "81435ade-e853-4dfc-8616-d9a915a67365"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:07 crc kubenswrapper[4922]: I0126 15:47:07.350793 4922 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81435ade-e853-4dfc-8616-d9a915a67365-host\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:07 crc kubenswrapper[4922]: I0126 15:47:07.362752 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81435ade-e853-4dfc-8616-d9a915a67365-kube-api-access-tj44j" (OuterVolumeSpecName: "kube-api-access-tj44j") pod "81435ade-e853-4dfc-8616-d9a915a67365" (UID: "81435ade-e853-4dfc-8616-d9a915a67365"). InnerVolumeSpecName "kube-api-access-tj44j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:47:07 crc kubenswrapper[4922]: I0126 15:47:07.452900 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj44j\" (UniqueName: \"kubernetes.io/projected/81435ade-e853-4dfc-8616-d9a915a67365-kube-api-access-tj44j\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:08 crc kubenswrapper[4922]: I0126 15:47:08.145669 4922 scope.go:117] "RemoveContainer" containerID="bc72dc16dc52e4f2a84cab4951ec7c29ba6a62ade9233f3be99d73c96d42d6d6" Jan 26 15:47:08 crc kubenswrapper[4922]: I0126 15:47:08.146358 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkzlp/crc-debug-gc9dh" Jan 26 15:47:09 crc kubenswrapper[4922]: I0126 15:47:09.104276 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81435ade-e853-4dfc-8616-d9a915a67365" path="/var/lib/kubelet/pods/81435ade-e853-4dfc-8616-d9a915a67365/volumes" Jan 26 15:47:14 crc kubenswrapper[4922]: I0126 15:47:14.092043 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:47:14 crc kubenswrapper[4922]: E0126 15:47:14.092887 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:47:29 crc kubenswrapper[4922]: I0126 15:47:29.093327 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:47:29 crc kubenswrapper[4922]: E0126 15:47:29.094550 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:47:33 crc kubenswrapper[4922]: I0126 15:47:33.247269 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75bc76c88b-b6znr_5ed1bf50-3aec-40cc-843f-afe6a0b2027d/barbican-api/0.log" Jan 26 15:47:33 crc kubenswrapper[4922]: I0126 15:47:33.444586 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75bc76c88b-b6znr_5ed1bf50-3aec-40cc-843f-afe6a0b2027d/barbican-api-log/0.log" Jan 26 15:47:33 crc kubenswrapper[4922]: I0126 15:47:33.534124 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f88dc4bbb-qt8fk_94ec145d-ce40-473f-8598-dbf02d89cc44/barbican-keystone-listener/0.log" Jan 26 15:47:33 crc kubenswrapper[4922]: I0126 15:47:33.673743 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f88dc4bbb-qt8fk_94ec145d-ce40-473f-8598-dbf02d89cc44/barbican-keystone-listener-log/0.log" Jan 26 15:47:33 crc kubenswrapper[4922]: I0126 15:47:33.702082 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bff9847b7-bs5nf_857d24cb-db5e-45ef-9b8a-025ee81b0083/barbican-worker/0.log" Jan 26 15:47:33 crc kubenswrapper[4922]: I0126 
15:47:33.790922 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bff9847b7-bs5nf_857d24cb-db5e-45ef-9b8a-025ee81b0083/barbican-worker-log/0.log" Jan 26 15:47:33 crc kubenswrapper[4922]: I0126 15:47:33.970266 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65_c6728b4b-8be0-4841-bbd4-0832817d537e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:34 crc kubenswrapper[4922]: I0126 15:47:34.137923 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_31909446-1712-442b-a346-7b4bb84f8584/ceilometer-central-agent/0.log" Jan 26 15:47:34 crc kubenswrapper[4922]: I0126 15:47:34.206371 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_31909446-1712-442b-a346-7b4bb84f8584/ceilometer-notification-agent/0.log" Jan 26 15:47:34 crc kubenswrapper[4922]: I0126 15:47:34.210820 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_31909446-1712-442b-a346-7b4bb84f8584/sg-core/0.log" Jan 26 15:47:34 crc kubenswrapper[4922]: I0126 15:47:34.248724 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_31909446-1712-442b-a346-7b4bb84f8584/proxy-httpd/0.log" Jan 26 15:47:34 crc kubenswrapper[4922]: I0126 15:47:34.445811 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2408f586-2d21-49ee-a728-08b3190483b8/cinder-api-log/0.log" Jan 26 15:47:34 crc kubenswrapper[4922]: I0126 15:47:34.725619 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_643689f7-b9d6-4f8a-a41b-a2a473973bd2/probe/0.log" Jan 26 15:47:34 crc kubenswrapper[4922]: I0126 15:47:34.952691 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_643689f7-b9d6-4f8a-a41b-a2a473973bd2/cinder-backup/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.014895 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9456dfb4-60f4-440a-b11c-aef57ca86762/cinder-scheduler/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.136212 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2408f586-2d21-49ee-a728-08b3190483b8/cinder-api/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.149546 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9456dfb4-60f4-440a-b11c-aef57ca86762/probe/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.388479 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_25f323f8-db37-45ee-8db5-e3248826d64e/probe/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.401295 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_25f323f8-db37-45ee-8db5-e3248826d64e/cinder-volume/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.604431 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_24f527f0-1574-4733-8102-e412468ad8a6/probe/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.625480 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_24f527f0-1574-4733-8102-e412468ad8a6/cinder-volume/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.675803 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8_967a08bd-ab17-442c-bc7f-0a37ecd86306/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.829023 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xsptg_cd6ac053-8747-40cb-87df-2ad523dafbf0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:35 crc kubenswrapper[4922]: I0126 15:47:35.962491 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78d579bbd7-jvjv8_deee4578-fc2b-4162-b39e-012e9b6b2e8a/init/0.log" Jan 26 15:47:36 crc kubenswrapper[4922]: I0126 15:47:36.113470 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78d579bbd7-jvjv8_deee4578-fc2b-4162-b39e-012e9b6b2e8a/init/0.log" Jan 26 15:47:36 crc kubenswrapper[4922]: I0126 15:47:36.200948 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gcv99_e8749ec8-770d-498f-9ace-ad44e3385a36/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:36 crc kubenswrapper[4922]: I0126 15:47:36.311044 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78d579bbd7-jvjv8_deee4578-fc2b-4162-b39e-012e9b6b2e8a/dnsmasq-dns/0.log" Jan 26 15:47:36 crc kubenswrapper[4922]: I0126 15:47:36.399203 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a055cb9-0f37-4772-a2af-63c1517cb256/glance-log/0.log" Jan 26 15:47:36 crc kubenswrapper[4922]: I0126 15:47:36.492162 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a055cb9-0f37-4772-a2af-63c1517cb256/glance-httpd/0.log" Jan 26 15:47:36 crc kubenswrapper[4922]: I0126 15:47:36.646787 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9/glance-httpd/0.log" Jan 26 15:47:36 crc kubenswrapper[4922]: I0126 15:47:36.679270 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9/glance-log/0.log" Jan 26 15:47:36 crc kubenswrapper[4922]: I0126 15:47:36.957763 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c779658fd-pldff_0c995c1b-6b75-4638-a5d7-1df1539dcaeb/horizon/0.log" Jan 26 15:47:37 crc kubenswrapper[4922]: I0126 15:47:37.036475 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw_dfdf6694-c807-448c-beed-03053e451f2b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:37 crc kubenswrapper[4922]: I0126 15:47:37.225762 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rl9qs_ef2f11f3-ab6f-449f-9bf8-1306119e67ad/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:37 crc kubenswrapper[4922]: I0126 15:47:37.600799 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c779658fd-pldff_0c995c1b-6b75-4638-a5d7-1df1539dcaeb/horizon-log/0.log" Jan 26 15:47:37 crc kubenswrapper[4922]: I0126 15:47:37.619025 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490661-z2dhr_a1e08b81-5e31-4556-93f4-06430fed0f54/keystone-cron/0.log" Jan 26 15:47:37 crc 
kubenswrapper[4922]: I0126 15:47:37.871549 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_131b28f9-a3ee-401d-a4e0-f66ec118f156/kube-state-metrics/0.log" Jan 26 15:47:38 crc kubenswrapper[4922]: I0126 15:47:38.017844 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7cd8b7c676-rg4sd_aead8c46-9f8b-45dd-9561-b320c5c7bde4/keystone-api/0.log" Jan 26 15:47:38 crc kubenswrapper[4922]: I0126 15:47:38.037122 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4_eb0a3861-3e56-4795-a6b3-48870bdf183a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:38 crc kubenswrapper[4922]: I0126 15:47:38.520007 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2_dd09600e-1e19-4a04-8e03-12312a20e513/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:38 crc kubenswrapper[4922]: I0126 15:47:38.540613 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75cdcc7857-fs8tr_f79e2698-4080-4a22-8110-89e8c7217018/neutron-httpd/0.log" Jan 26 15:47:38 crc kubenswrapper[4922]: I0126 15:47:38.652107 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75cdcc7857-fs8tr_f79e2698-4080-4a22-8110-89e8c7217018/neutron-api/0.log" Jan 26 15:47:39 crc kubenswrapper[4922]: I0126 15:47:39.170121 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9d39f315-a7ea-4004-a187-649a4ff3846b/nova-cell0-conductor-conductor/0.log" Jan 26 15:47:39 crc kubenswrapper[4922]: I0126 15:47:39.515718 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_b8407196-4ae4-4db4-95d5-3498f9503f5e/nova-cell1-conductor-conductor/0.log" Jan 26 15:47:39 crc kubenswrapper[4922]: I0126 15:47:39.821336 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f83d6fb5-2b7a-4982-9719-3a03aa125f00/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 15:47:40 crc kubenswrapper[4922]: I0126 15:47:40.092712 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-kljvk_91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:40 crc kubenswrapper[4922]: I0126 15:47:40.132918 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ec77602e-4cce-4d70-90ec-6d6adc5f6643/nova-api-log/0.log" Jan 26 15:47:40 crc kubenswrapper[4922]: I0126 15:47:40.389968 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d01bf414-0bdd-49f2-aa15-54f8ddb04d7b/nova-metadata-log/0.log" Jan 26 15:47:40 crc kubenswrapper[4922]: I0126 15:47:40.767093 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ec77602e-4cce-4d70-90ec-6d6adc5f6643/nova-api-api/0.log" Jan 26 15:47:40 crc kubenswrapper[4922]: I0126 15:47:40.891017 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_729a7732-744d-4ef7-b2c5-054f0f5f7f79/mysql-bootstrap/0.log" Jan 26 15:47:40 crc kubenswrapper[4922]: I0126 15:47:40.942433 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77/nova-scheduler-scheduler/0.log" Jan 26 15:47:41 crc kubenswrapper[4922]: I0126 15:47:41.072516 4922 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_729a7732-744d-4ef7-b2c5-054f0f5f7f79/galera/0.log" Jan 26 15:47:41 crc kubenswrapper[4922]: I0126 15:47:41.094670 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:47:41 crc kubenswrapper[4922]: E0126 15:47:41.095224 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:47:41 crc kubenswrapper[4922]: I0126 15:47:41.126919 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_729a7732-744d-4ef7-b2c5-054f0f5f7f79/mysql-bootstrap/0.log" Jan 26 15:47:41 crc kubenswrapper[4922]: I0126 15:47:41.280989 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_205c6bf6-b838-4bea-9cf8-df9fe42bd53f/mysql-bootstrap/0.log" Jan 26 15:47:41 crc kubenswrapper[4922]: I0126 15:47:41.491663 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_205c6bf6-b838-4bea-9cf8-df9fe42bd53f/mysql-bootstrap/0.log" Jan 26 15:47:41 crc kubenswrapper[4922]: I0126 15:47:41.567612 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_205c6bf6-b838-4bea-9cf8-df9fe42bd53f/galera/0.log" Jan 26 15:47:41 crc kubenswrapper[4922]: I0126 15:47:41.764269 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c962db74-b70e-44df-a3d2-8a2dda688ca8/openstackclient/0.log" Jan 26 15:47:41 crc kubenswrapper[4922]: I0126 15:47:41.825093 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2nwnw_6d76ee55-7df1-42fb-817b-031f44d36f82/openstack-network-exporter/0.log" Jan 26 15:47:42 crc kubenswrapper[4922]: I0126 15:47:42.028815 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fpgzk_7088cbad-121a-40f6-9934-60a62f980b6d/ovsdb-server-init/0.log" Jan 26 15:47:42 crc kubenswrapper[4922]: I0126 15:47:42.286434 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fpgzk_7088cbad-121a-40f6-9934-60a62f980b6d/ovsdb-server-init/0.log" Jan 26 15:47:42 crc kubenswrapper[4922]: I0126 15:47:42.336108 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fpgzk_7088cbad-121a-40f6-9934-60a62f980b6d/ovsdb-server/0.log" Jan 26 15:47:42 crc kubenswrapper[4922]: I0126 15:47:42.803131 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fpgzk_7088cbad-121a-40f6-9934-60a62f980b6d/ovs-vswitchd/0.log" Jan 26 15:47:42 crc kubenswrapper[4922]: I0126 15:47:42.853082 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x4rqw_1a2c2044-5422-40dc-92f5-051f1da6b2a2/ovn-controller/0.log" Jan 26 15:47:42 crc kubenswrapper[4922]: I0126 15:47:42.995818 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d01bf414-0bdd-49f2-aa15-54f8ddb04d7b/nova-metadata-metadata/0.log" Jan 26 15:47:43 crc kubenswrapper[4922]: I0126 15:47:43.142905 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-wvsjz_d6ff5d49-d748-41c2-9893-b3cd1fd09b2d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:43 crc kubenswrapper[4922]: I0126 15:47:43.293629 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_44db7ec1-3a40-46de-b048-94191897a988/openstack-network-exporter/0.log" Jan 26 15:47:43 crc kubenswrapper[4922]: I0126 15:47:43.413503 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_44db7ec1-3a40-46de-b048-94191897a988/ovn-northd/0.log" Jan 26 15:47:43 crc kubenswrapper[4922]: I0126 15:47:43.481857 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f5cedc59-0829-41da-94bd-17137258865f/openstack-network-exporter/0.log" Jan 26 15:47:43 crc kubenswrapper[4922]: I0126 15:47:43.614512 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f5cedc59-0829-41da-94bd-17137258865f/ovsdbserver-nb/0.log" Jan 26 15:47:43 crc kubenswrapper[4922]: I0126 15:47:43.842022 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6bc48070-5821-46c0-b06a-d50d64d22e19/ovsdbserver-sb/0.log" Jan 26 15:47:44 crc kubenswrapper[4922]: I0126 15:47:44.038159 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6bc48070-5821-46c0-b06a-d50d64d22e19/openstack-network-exporter/0.log" Jan 26 15:47:44 crc kubenswrapper[4922]: I0126 15:47:44.268418 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/init-config-reloader/0.log" Jan 26 15:47:44 crc kubenswrapper[4922]: I0126 15:47:44.331870 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c4549854d-f2kpv_75199453-47fb-4d94-ae1d-908c20b64cfd/placement-api/0.log" Jan 26 15:47:44 crc kubenswrapper[4922]: I0126 15:47:44.407365 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c4549854d-f2kpv_75199453-47fb-4d94-ae1d-908c20b64cfd/placement-log/0.log" Jan 26 15:47:44 crc kubenswrapper[4922]: I0126 15:47:44.569159 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/init-config-reloader/0.log" Jan 26 15:47:44 crc kubenswrapper[4922]: I0126 15:47:44.594235 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/config-reloader/0.log" Jan 26 15:47:44 crc kubenswrapper[4922]: I0126 15:47:44.642821 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/thanos-sidecar/0.log" Jan 26 15:47:44 crc kubenswrapper[4922]: I0126 15:47:44.673453 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/prometheus/0.log" Jan 26 15:47:44 crc kubenswrapper[4922]: I0126 15:47:44.835815 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34b7c66d-87b0-4db4-aa8c-7dd19293e8fd/setup-container/0.log" Jan 26 15:47:45 crc kubenswrapper[4922]: I0126 15:47:45.034683 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34b7c66d-87b0-4db4-aa8c-7dd19293e8fd/setup-container/0.log" Jan 26 15:47:45 crc kubenswrapper[4922]: I0126 15:47:45.133144 4922 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_1881b31a-fd0f-40c8-a098-10888cec43db/setup-container/0.log" Jan 26 15:47:45 crc kubenswrapper[4922]: I0126 15:47:45.170939 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34b7c66d-87b0-4db4-aa8c-7dd19293e8fd/rabbitmq/0.log" Jan 26 15:47:45 crc kubenswrapper[4922]: I0126 15:47:45.427588 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_1881b31a-fd0f-40c8-a098-10888cec43db/rabbitmq/0.log" Jan 26 15:47:45 crc kubenswrapper[4922]: I0126 15:47:45.478867 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_1881b31a-fd0f-40c8-a098-10888cec43db/setup-container/0.log" Jan 26 15:47:45 crc kubenswrapper[4922]: I0126 15:47:45.545522 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e3a4bf42-9b24-473a-bca6-f81f1d0884fb/setup-container/0.log" Jan 26 15:47:45 crc kubenswrapper[4922]: I0126 15:47:45.771736 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e3a4bf42-9b24-473a-bca6-f81f1d0884fb/setup-container/0.log" Jan 26 15:47:45 crc kubenswrapper[4922]: I0126 15:47:45.816081 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e3a4bf42-9b24-473a-bca6-f81f1d0884fb/rabbitmq/0.log" Jan 26 15:47:46 crc kubenswrapper[4922]: I0126 15:47:46.226664 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d_e17ee626-9062-4bd3-8566-93e6160b89bc/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:46 crc kubenswrapper[4922]: I0126 15:47:46.339794 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-l759l_ed501c23-2119-4ef8-9d37-776d3a5c5d6e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:46 crc kubenswrapper[4922]: I0126 15:47:46.542007 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc_650a3e49-f342-4b32-940a-2f64bdb45fb3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:46 crc kubenswrapper[4922]: I0126 15:47:46.667832 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-d9pm5_d0238a10-7400-4e82-ab24-d9f30ee2b02d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:47:46 crc kubenswrapper[4922]: I0126 15:47:46.790651 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-mdksr_377f1114-a9f8-4b98-96c9-71f827483095/ssh-known-hosts-edpm-deployment/0.log" Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.042253 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69b95496c5-qvg59_a2bcb723-e3e3-41f8-9704-10a1f8e78bd7/proxy-server/0.log" Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.155722 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9mb5n_a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb/swift-ring-rebalance/0.log" Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.254238 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69b95496c5-qvg59_a2bcb723-e3e3-41f8-9704-10a1f8e78bd7/proxy-httpd/0.log" Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 
15:47:47.327886 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/account-auditor/0.log"
Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.394976 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/account-reaper/0.log"
Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.587544 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/account-server/0.log"
Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.601722 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/container-auditor/0.log"
Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.670443 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/account-replicator/0.log"
Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.754757 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/container-replicator/0.log"
Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.849325 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/container-server/0.log"
Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.904514 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/container-updater/0.log"
Jan 26 15:47:47 crc kubenswrapper[4922]: I0126 15:47:47.928902 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-auditor/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.022380 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-expirer/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.134414 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-replicator/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.183011 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-updater/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.247923 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/rsync/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.248676 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-server/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.369387 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/swift-recon-cron/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.539404 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-8nddh_4eaec89f-007e-4ecf-a60f-f9f6729dfe13/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.631857 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9cb3b1de-0efe-4de9-9e48-6f2f6885c197/memcached/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.665563 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_29bf7bdf-8c0e-4e1c-812d-1220cc968575/tempest-tests-tempest-tests-runner/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.812393 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b8a7c013-fdc7-4f64-b17c-b48b89eda7f6/test-operator-logs-container/0.log"
Jan 26 15:47:48 crc kubenswrapper[4922]: I0126 15:47:48.829286 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f_460930ff-ef82-4c8d-8f3b-36551f8fb401/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 26 15:47:49 crc kubenswrapper[4922]: I0126 15:47:49.467723 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_6d5cf795-cb42-4d01-8121-5ef71cedd729/watcher-applier/0.log"
Jan 26 15:47:50 crc kubenswrapper[4922]: I0126 15:47:50.437033 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7b5e0a69-30c9-435f-a566-b97de4e1b850/watcher-api-log/0.log"
Jan 26 15:47:52 crc kubenswrapper[4922]: I0126 15:47:52.402364 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2/watcher-decision-engine/0.log"
Jan 26 15:47:53 crc kubenswrapper[4922]: I0126 15:47:53.095981 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:47:53 crc kubenswrapper[4922]: E0126 15:47:53.096308 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:47:53 crc kubenswrapper[4922]: I0126 15:47:53.662281 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7b5e0a69-30c9-435f-a566-b97de4e1b850/watcher-api/0.log"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.354832 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wvdfj"]
Jan 26 15:47:55 crc kubenswrapper[4922]: E0126 15:47:55.355633 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81435ade-e853-4dfc-8616-d9a915a67365" containerName="container-00"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.355647 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="81435ade-e853-4dfc-8616-d9a915a67365" containerName="container-00"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.355868 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="81435ade-e853-4dfc-8616-d9a915a67365" containerName="container-00"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.357490 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.377048 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wvdfj"]
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.393511 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-catalog-content\") pod \"redhat-operators-wvdfj\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") " pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.393605 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtf2\" (UniqueName: \"kubernetes.io/projected/0f9dbdf8-7328-4273-bbfa-72391b9685b8-kube-api-access-xbtf2\") pod \"redhat-operators-wvdfj\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") " pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.393657 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-utilities\") pod \"redhat-operators-wvdfj\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") " pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.495629 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbtf2\" (UniqueName: \"kubernetes.io/projected/0f9dbdf8-7328-4273-bbfa-72391b9685b8-kube-api-access-xbtf2\") pod \"redhat-operators-wvdfj\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") " pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.495700 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-utilities\") pod \"redhat-operators-wvdfj\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") " pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.495828 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-catalog-content\") pod \"redhat-operators-wvdfj\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") " pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.496340 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-catalog-content\") pod \"redhat-operators-wvdfj\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") " pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.496513 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-utilities\") pod \"redhat-operators-wvdfj\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") " pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.516824 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbtf2\" (UniqueName: \"kubernetes.io/projected/0f9dbdf8-7328-4273-bbfa-72391b9685b8-kube-api-access-xbtf2\") pod \"redhat-operators-wvdfj\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") " pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:55 crc kubenswrapper[4922]: I0126 15:47:55.709600 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:47:56 crc kubenswrapper[4922]: I0126 15:47:56.208269 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wvdfj"]
Jan 26 15:47:56 crc kubenswrapper[4922]: W0126 15:47:56.213183 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f9dbdf8_7328_4273_bbfa_72391b9685b8.slice/crio-ea14283b2f3426aa5686b00043cbb213aa1ab98a6ddceb4ddca3b3deff0f99a0 WatchSource:0}: Error finding container ea14283b2f3426aa5686b00043cbb213aa1ab98a6ddceb4ddca3b3deff0f99a0: Status 404 returned error can't find the container with id ea14283b2f3426aa5686b00043cbb213aa1ab98a6ddceb4ddca3b3deff0f99a0
Jan 26 15:47:56 crc kubenswrapper[4922]: I0126 15:47:56.687188 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvdfj" event={"ID":"0f9dbdf8-7328-4273-bbfa-72391b9685b8","Type":"ContainerStarted","Data":"ea14283b2f3426aa5686b00043cbb213aa1ab98a6ddceb4ddca3b3deff0f99a0"}
Jan 26 15:47:57 crc kubenswrapper[4922]: I0126 15:47:57.697730 4922 generic.go:334] "Generic (PLEG): container finished" podID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerID="a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee" exitCode=0
Jan 26 15:47:57 crc kubenswrapper[4922]: I0126 15:47:57.697820 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvdfj" event={"ID":"0f9dbdf8-7328-4273-bbfa-72391b9685b8","Type":"ContainerDied","Data":"a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee"}
Jan 26 15:48:00 crc kubenswrapper[4922]: I0126 15:48:00.730685 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvdfj" event={"ID":"0f9dbdf8-7328-4273-bbfa-72391b9685b8","Type":"ContainerStarted","Data":"7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9"}
Jan 26 15:48:05 crc kubenswrapper[4922]: I0126 15:48:05.791904 4922 generic.go:334] "Generic (PLEG): container finished" podID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerID="7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9" exitCode=0
Jan 26 15:48:05 crc kubenswrapper[4922]: I0126 15:48:05.791999 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvdfj" event={"ID":"0f9dbdf8-7328-4273-bbfa-72391b9685b8","Type":"ContainerDied","Data":"7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9"}
Jan 26 15:48:06 crc kubenswrapper[4922]: I0126 15:48:06.814388 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvdfj" event={"ID":"0f9dbdf8-7328-4273-bbfa-72391b9685b8","Type":"ContainerStarted","Data":"c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2"}
Jan 26 15:48:06 crc kubenswrapper[4922]: I0126 15:48:06.835793 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wvdfj" podStartSLOduration=4.357321732 podStartE2EDuration="11.835775304s" podCreationTimestamp="2026-01-26 15:47:55 +0000 UTC" firstStartedPulling="2026-01-26 15:47:58.711051592 +0000 UTC m=+5895.913314364" lastFinishedPulling="2026-01-26 15:48:06.189505164 +0000 UTC m=+5903.391767936" observedRunningTime="2026-01-26 15:48:06.831523289 +0000 UTC m=+5904.033786061" watchObservedRunningTime="2026-01-26 15:48:06.835775304 +0000 UTC m=+5904.038038076"
Jan 26 15:48:07 crc kubenswrapper[4922]: I0126 15:48:07.094004 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:48:07 crc kubenswrapper[4922]: E0126 15:48:07.094255 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:48:15 crc kubenswrapper[4922]: I0126 15:48:15.710532 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:48:15 crc kubenswrapper[4922]: I0126 15:48:15.711139 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:48:15 crc kubenswrapper[4922]: I0126 15:48:15.762426 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:48:15 crc kubenswrapper[4922]: I0126 15:48:15.946511 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:48:16 crc kubenswrapper[4922]: I0126 15:48:16.003955 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wvdfj"]
Jan 26 15:48:17 crc kubenswrapper[4922]: I0126 15:48:17.920911 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wvdfj" podUID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerName="registry-server" containerID="cri-o://c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2" gracePeriod=2
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.349060 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/util/0.log"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.372441 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/util/0.log"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.585909 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/pull/0.log"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.594116 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/pull/0.log"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.778169 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.778977 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/pull/0.log"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.818783 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/extract/0.log"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.884207 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/util/0.log"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.934056 4922 generic.go:334] "Generic (PLEG): container finished" podID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerID="c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2" exitCode=0
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.935385 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvdfj"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.935367 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvdfj" event={"ID":"0f9dbdf8-7328-4273-bbfa-72391b9685b8","Type":"ContainerDied","Data":"c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2"}
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.935622 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvdfj" event={"ID":"0f9dbdf8-7328-4273-bbfa-72391b9685b8","Type":"ContainerDied","Data":"ea14283b2f3426aa5686b00043cbb213aa1ab98a6ddceb4ddca3b3deff0f99a0"}
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.935646 4922 scope.go:117] "RemoveContainer" containerID="c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.963372 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-utilities\") pod \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") "
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.966611 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-catalog-content\") pod \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") "
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.969682 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbtf2\" (UniqueName: \"kubernetes.io/projected/0f9dbdf8-7328-4273-bbfa-72391b9685b8-kube-api-access-xbtf2\") pod \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\" (UID: \"0f9dbdf8-7328-4273-bbfa-72391b9685b8\") "
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.964387 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-utilities" (OuterVolumeSpecName: "utilities") pod "0f9dbdf8-7328-4273-bbfa-72391b9685b8" (UID: "0f9dbdf8-7328-4273-bbfa-72391b9685b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.972208 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.974469 4922 scope.go:117] "RemoveContainer" containerID="7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9"
Jan 26 15:48:18 crc kubenswrapper[4922]: I0126 15:48:18.977454 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9dbdf8-7328-4273-bbfa-72391b9685b8-kube-api-access-xbtf2" (OuterVolumeSpecName: "kube-api-access-xbtf2") pod "0f9dbdf8-7328-4273-bbfa-72391b9685b8" (UID: "0f9dbdf8-7328-4273-bbfa-72391b9685b8"). InnerVolumeSpecName "kube-api-access-xbtf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.076977 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbtf2\" (UniqueName: \"kubernetes.io/projected/0f9dbdf8-7328-4273-bbfa-72391b9685b8-kube-api-access-xbtf2\") on node \"crc\" DevicePath \"\""
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.077299 4922 scope.go:117] "RemoveContainer" containerID="a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.087252 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-d4kf8_2dc5ea59-1467-4fec-b933-e144ea4fda4a/manager/0.log"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.093379 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:48:19 crc kubenswrapper[4922]: E0126 15:48:19.093723 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.133802 4922 scope.go:117] "RemoveContainer" containerID="c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2"
Jan 26 15:48:19 crc kubenswrapper[4922]: E0126 15:48:19.134440 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2\": container with ID starting with c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2 not found: ID does not exist" containerID="c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.134564 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2"} err="failed to get container status \"c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2\": rpc error: code = NotFound desc = could not find container \"c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2\": container with ID starting with c4cc9898762b11d064b29cedb24a8a79973904f690888aeea12508618cdfd6d2 not found: ID does not exist"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.134676 4922 scope.go:117] "RemoveContainer" containerID="7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9"
Jan 26 15:48:19 crc kubenswrapper[4922]: E0126 15:48:19.135489 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9\": container with ID starting with 7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9 not found: ID does not exist" containerID="7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.135586 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9"} err="failed to get container status \"7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9\": rpc error: code = NotFound desc = could not find container \"7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9\": container with ID starting with 7c5bb471d8b197f5f854e6ccea2035ab4af92483868c37cacfafb2d092891fd9 not found: ID does not exist"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.135668 4922 scope.go:117] "RemoveContainer" containerID="a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee"
Jan 26 15:48:19 crc kubenswrapper[4922]: E0126 15:48:19.136048 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee\": container with ID starting with a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee not found: ID does not exist" containerID="a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.136125 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee"} err="failed to get container status \"a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee\": rpc error: code = NotFound desc = could not find container \"a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee\": container with ID starting with a9f9b8257d4634c54316a5464b072b887b92cc69b06bf68a356e113c23a662ee not found: ID does not exist"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.149537 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f9dbdf8-7328-4273-bbfa-72391b9685b8" (UID: "0f9dbdf8-7328-4273-bbfa-72391b9685b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.178868 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9dbdf8-7328-4273-bbfa-72391b9685b8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.221991 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-6pjpc_edd25ba7-355c-48aa-a7f5-0a60df9f1307/manager/0.log"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.285508 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wvdfj"]
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.307349 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wvdfj"]
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.317449 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-sthwh_98d7d86a-4bc1-4165-9dc5-3260b879df04/manager/0.log"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.489629 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-9xxqk_3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e/manager/0.log"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.545171 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fr5t7_2203be8d-8aa1-4617-8297-c715783969a6/manager/0.log"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.709858 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-grh8g_eb93770e-722e-474d-93ef-5767d506fbf5/manager/0.log"
Jan 26 15:48:19 crc kubenswrapper[4922]: I0126 15:48:19.944311 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-vbmsc_3e944bff-02ee-4d1d-948b-350795772f18/manager/0.log"
Jan 26 15:48:20 crc kubenswrapper[4922]: I0126 15:48:20.135908 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-xwz7c_ae2af37b-8945-48b3-8ed3-c2412b39c897/manager/0.log"
Jan 26 15:48:20 crc kubenswrapper[4922]: I0126 15:48:20.285359 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-s2hjs_03233631-2567-42a5-af70-861afeefbba3/manager/0.log"
Jan 26 15:48:20 crc kubenswrapper[4922]: I0126 15:48:20.310115 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-lvrq9_aa376169-3b34-4289-b339-14fc6f14a0e9/manager/0.log"
Jan 26 15:48:20 crc kubenswrapper[4922]: I0126 15:48:20.454301 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk_fa8912b6-c04f-4a1e-bb7a-8cae762f00ab/manager/0.log"
Jan 26 15:48:20 crc kubenswrapper[4922]: I0126 15:48:20.563399 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-n9bp2_f3aabab5-bdde-4359-b011-5887666ee21a/manager/0.log"
Jan 26 15:48:20 crc kubenswrapper[4922]: I0126 15:48:20.777187 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-vg2c5_3ed4f8f8-86dd-4331-b60d-ac713fe8be31/manager/0.log"
Jan 26 15:48:20 crc kubenswrapper[4922]: I0126 15:48:20.777471 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-npcs6_94e756c6-328c-4065-9d81-2cd1f5293a0a/manager/0.log"
Jan 26 15:48:20 crc kubenswrapper[4922]: I0126 15:48:20.969631 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x_e741d752-bf89-4fc2-a173-98a5e6257ffc/manager/0.log"
Jan 26 15:48:21 crc kubenswrapper[4922]: I0126 15:48:21.110812 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" path="/var/lib/kubelet/pods/0f9dbdf8-7328-4273-bbfa-72391b9685b8/volumes"
Jan 26 15:48:21 crc kubenswrapper[4922]: I0126 15:48:21.140487 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-c7fd5fdf7-4dsfg_5857d460-cbd7-4dba-b280-e791678bc021/operator/0.log"
Jan 26 15:48:21 crc kubenswrapper[4922]: I0126 15:48:21.299958 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v57b5_64ecb550-f2ab-4e01-88b7-e8059bd434ff/registry-server/0.log"
Jan 26 15:48:21 crc kubenswrapper[4922]: I0126 15:48:21.614131 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-7w6r2_f3a5936d-5620-4b92-92ef-71b8387e019e/manager/0.log"
Jan 26 15:48:21 crc kubenswrapper[4922]: I0126 15:48:21.769867 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-2mvbc_d487712f-146f-4342-a84e-6dca10b381fe/manager/0.log"
Jan 26 15:48:21 crc kubenswrapper[4922]: I0126 15:48:21.963413 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fwdn9_f78795e3-4b41-43ec-b56d-37745dd146cd/operator/0.log"
Jan 26 15:48:22 crc kubenswrapper[4922]: I0126 15:48:22.270462 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-h4zkz_9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143/manager/0.log"
Jan 26 15:48:22 crc kubenswrapper[4922]: I0126 15:48:22.587431 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-8tk4x_542bee92-421c-4969-9fb8-da684d74ab1d/manager/0.log"
Jan 26 15:48:22 crc kubenswrapper[4922]: I0126 15:48:22.667613 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-wpq74_3732fc65-c182-42c3-9a98-b9aff1d49a1d/manager/0.log"
Jan 26 15:48:22 crc kubenswrapper[4922]: I0126 15:48:22.902490 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b6496445-44795_2de91e12-3fbb-48e3-ac0f-55d98628405e/manager/0.log"
Jan 26 15:48:23 crc kubenswrapper[4922]: I0126 15:48:23.021554 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75d4cf59bb-dctt2_033a8dae-299b-49cc-a63e-2d4bf250488c/manager/0.log"
Jan 26 15:48:33 crc kubenswrapper[4922]: I0126 15:48:33.102879 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:48:33 crc kubenswrapper[4922]: E0126 15:48:33.103755 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:48:41 crc kubenswrapper[4922]: I0126 15:48:41.363471 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qhcrv_3529a429-628d-4c73-aaad-ee3719ea2022/control-plane-machine-set-operator/0.log"
Jan 26 15:48:41 crc kubenswrapper[4922]: I0126 15:48:41.526406 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s5rq6_3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f/kube-rbac-proxy/0.log"
Jan 26 15:48:41 crc kubenswrapper[4922]: I0126 15:48:41.590257 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s5rq6_3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f/machine-api-operator/0.log"
Jan 26 15:48:46 crc kubenswrapper[4922]: I0126 15:48:46.093321 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:48:46 crc kubenswrapper[4922]: E0126 15:48:46.094261 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:48:54 crc kubenswrapper[4922]: I0126 15:48:54.047697 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vv74v_0ac6b35b-af7a-4913-985e-8d42d2f246f9/cert-manager-controller/0.log"
Jan 26 15:48:54 crc kubenswrapper[4922]: I0126 15:48:54.125726 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-vw7ht_90a11b15-590d-43f0-957a-67389e3cd75b/cert-manager-cainjector/0.log"
Jan 26 15:48:54 crc kubenswrapper[4922]: I0126 15:48:54.185471 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-hdzlp_0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124/cert-manager-webhook/0.log"
Jan 26 15:49:01 crc kubenswrapper[4922]: I0126 15:49:01.093381 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:49:01 crc kubenswrapper[4922]: E0126 15:49:01.094272 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:49:06 crc kubenswrapper[4922]: I0126 15:49:06.104900 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-rhptb_fedbbbac-c62a-46aa-adfd-4bed0c5282fc/nmstate-console-plugin/0.log"
Jan 26 15:49:06 crc kubenswrapper[4922]: I0126 15:49:06.317981 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c6w6t_594847ad-6266-4357-a47a-aa6383207517/nmstate-handler/0.log"
Jan 26 15:49:06 crc kubenswrapper[4922]: I0126 15:49:06.331408 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-99g5t_ca3a7e5f-211d-40ef-bfb8-261b1af52cda/kube-rbac-proxy/0.log"
Jan 26 15:49:06 crc kubenswrapper[4922]: I0126 15:49:06.517916 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-99g5t_ca3a7e5f-211d-40ef-bfb8-261b1af52cda/nmstate-metrics/0.log"
Jan 26 15:49:06 crc kubenswrapper[4922]: I0126 15:49:06.586403 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-7mfk9_95cb5278-d3ed-40e1-8d00-6dd6acbedd3d/nmstate-operator/0.log"
Jan 26 15:49:06 crc kubenswrapper[4922]: I0126 15:49:06.702095 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-w7dvs_e02060f5-4687-4f14-9e1a-d94d855d5563/nmstate-webhook/0.log"
Jan 26 15:49:16 crc kubenswrapper[4922]: I0126 15:49:16.093220 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:49:16 crc kubenswrapper[4922]: E0126 15:49:16.094160 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:49:20 crc kubenswrapper[4922]: I0126 15:49:20.143404 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xgf5b_3249a43d-d843-43c3-b922-be437eabb548/prometheus-operator/0.log"
Jan 26 15:49:20 crc kubenswrapper[4922]: I0126 15:49:20.622215 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9_7289018a-d6ca-4075-b586-e180be982247/prometheus-operator-admission-webhook/0.log"
Jan 26 15:49:20 crc kubenswrapper[4922]: I0126 15:49:20.627942 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv_e30b09af-aae4-4f17-ab60-25f6f3dca352/prometheus-operator-admission-webhook/0.log"
Jan 26 15:49:20 crc kubenswrapper[4922]: I0126 15:49:20.806257 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mct2h_157e0710-b880-4501-99ad-864b2f70cef5/operator/0.log"
Jan 26 15:49:20 crc kubenswrapper[4922]: I0126 15:49:20.864762 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7gzrt_5b04fb53-39bc-4552-b7af-39e57a4102df/perses-operator/0.log"
Jan 26 15:49:29 crc kubenswrapper[4922]: I0126 15:49:29.092664 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:49:29 crc kubenswrapper[4922]: E0126 15:49:29.093560 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:49:35 crc kubenswrapper[4922]: I0126 15:49:35.065407 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bkg52_bac401ac-4b21-403d-a9e0-808c69a6e0e6/kube-rbac-proxy/0.log"
Jan 26 15:49:35 crc kubenswrapper[4922]: I0126 15:49:35.253372 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bkg52_bac401ac-4b21-403d-a9e0-808c69a6e0e6/controller/0.log"
Jan 26 15:49:35 crc kubenswrapper[4922]: I0126 15:49:35.434184 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-frr-files/0.log"
Jan 26 15:49:35 crc kubenswrapper[4922]: I0126 15:49:35.654180 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-frr-files/0.log"
Jan 26 15:49:35 crc kubenswrapper[4922]: I0126 15:49:35.664653 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-reloader/0.log"
Jan 26 15:49:35 crc kubenswrapper[4922]: I0126 15:49:35.690160 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-metrics/0.log"
Jan 26 15:49:35 crc kubenswrapper[4922]: I0126 15:49:35.697416 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-reloader/0.log"
Jan 26 15:49:35 crc kubenswrapper[4922]: I0126 15:49:35.960767 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-frr-files/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.276135 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-metrics/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.277665 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-reloader/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.279325 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-metrics/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.465281 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-frr-files/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.522720 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-metrics/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.541998 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-reloader/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.578331 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/controller/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.744255 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/frr-metrics/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.784418 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/kube-rbac-proxy/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.796589 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/kube-rbac-proxy-frr/0.log"
Jan 26 15:49:36 crc kubenswrapper[4922]: I0126 15:49:36.985810 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/reloader/0.log"
Jan 26 15:49:37 crc kubenswrapper[4922]: I0126 15:49:37.064447 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-gl477_c35b1371-974a-4a0f-b8a4-d7bf024090aa/frr-k8s-webhook-server/0.log"
Jan 26 15:49:37 crc kubenswrapper[4922]: I0126 15:49:37.308650 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c599ccf7c-52gjl_74e0af8f-5a55-4376-912d-095cd5078f93/manager/0.log"
Jan 26 15:49:37 crc kubenswrapper[4922]: I0126 15:49:37.619647 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fc78bf7bd-42g89_b7938fe0-7f27-49c6-959d-62405a4847f1/webhook-server/0.log"
Jan 26 15:49:37 crc kubenswrapper[4922]: I0126 15:49:37.657756 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q7phl_2a55e4e3-80c5-4e46-8916-5a306903ce70/kube-rbac-proxy/0.log"
Jan 26 15:49:38 crc kubenswrapper[4922]: I0126 15:49:38.385716 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q7phl_2a55e4e3-80c5-4e46-8916-5a306903ce70/speaker/0.log"
Jan 26 15:49:38 crc kubenswrapper[4922]: I0126 15:49:38.654962 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/frr/0.log"
Jan 26 15:49:42 crc kubenswrapper[4922]: I0126 15:49:42.093183 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:49:42 crc kubenswrapper[4922]: E0126 15:49:42.093948 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:49:51 crc kubenswrapper[4922]: I0126 15:49:51.817539 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/util/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.011869 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/util/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.027243 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/pull/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.053249 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/pull/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.420791 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/util/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.456854 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/extract/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.508495 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/pull/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.618492 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/util/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.856739 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/pull/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.860370 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/pull/0.log"
Jan 26 15:49:52 crc kubenswrapper[4922]: I0126 15:49:52.882603 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/util/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.062846 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/util/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.069554 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/extract/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.109391 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/pull/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.263261 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/util/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.423509 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/pull/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.467876 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/pull/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.480283 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/util/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.677903 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/pull/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.679044 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/extract/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.711832 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/util/0.log"
Jan 26 15:49:53 crc kubenswrapper[4922]: I0126 15:49:53.852774 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-utilities/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.003999 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-content/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.031537 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-content/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.054992 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-utilities/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.228329 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-utilities/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.273165 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-content/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.495156 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-utilities/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.598127 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/registry-server/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.630494 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-utilities/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.676772 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-content/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.726801 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-content/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.896442 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-utilities/0.log"
Jan 26 15:49:54 crc kubenswrapper[4922]: I0126 15:49:54.946047 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-content/0.log"
Jan 26 15:49:55 crc kubenswrapper[4922]: I0126 15:49:55.209918 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tjq29_a88a2014-3fba-45e3-bc74-1b2c803c10b5/marketplace-operator/0.log"
Jan 26 15:49:55 crc kubenswrapper[4922]: I0126 15:49:55.253181 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-utilities/0.log"
Jan 26 15:49:55 crc kubenswrapper[4922]: I0126 15:49:55.491722 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/registry-server/0.log"
Jan 26 15:49:55 crc kubenswrapper[4922]: I0126 15:49:55.517764 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-content/0.log"
Jan 26 15:49:55 crc kubenswrapper[4922]: I0126 15:49:55.585963 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-utilities/0.log"
Jan 26 15:49:55 crc kubenswrapper[4922]: I0126 15:49:55.594416 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-content/0.log"
Jan 26 15:49:55 crc kubenswrapper[4922]: I0126 15:49:55.810130 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-utilities/0.log"
Jan 26 15:49:55 crc kubenswrapper[4922]: I0126 15:49:55.852215 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-content/0.log"
Jan 26 15:49:56 crc kubenswrapper[4922]: I0126 15:49:56.049229 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/registry-server/0.log"
Jan 26 15:49:56 crc kubenswrapper[4922]: I0126 15:49:56.073054 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-utilities/0.log"
Jan 26 15:49:56 crc kubenswrapper[4922]: I0126 15:49:56.245291 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-content/0.log"
Jan 26 15:49:56 crc kubenswrapper[4922]: I0126 15:49:56.245456 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-content/0.log"
Jan 26 15:49:56 crc kubenswrapper[4922]: I0126 15:49:56.266438 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-utilities/0.log"
Jan 26 15:49:56 crc kubenswrapper[4922]: I0126 15:49:56.424590 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-utilities/0.log"
Jan 26 15:49:56 crc kubenswrapper[4922]: I0126 15:49:56.471346 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-content/0.log"
Jan 26 15:49:57 crc kubenswrapper[4922]: I0126 15:49:57.092274 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:49:57 crc kubenswrapper[4922]: E0126 15:49:57.092787 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:49:57 crc kubenswrapper[4922]: I0126 15:49:57.292004 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/registry-server/0.log"
Jan 26 15:50:09 crc kubenswrapper[4922]: I0126 15:50:09.784179 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9_7289018a-d6ca-4075-b586-e180be982247/prometheus-operator-admission-webhook/0.log"
Jan 26 15:50:09 crc kubenswrapper[4922]: I0126 15:50:09.807058 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xgf5b_3249a43d-d843-43c3-b922-be437eabb548/prometheus-operator/0.log"
Jan 26 15:50:09 crc kubenswrapper[4922]: I0126 15:50:09.833412 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv_e30b09af-aae4-4f17-ab60-25f6f3dca352/prometheus-operator-admission-webhook/0.log"
Jan 26 15:50:09 crc kubenswrapper[4922]: I0126 15:50:09.982888 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mct2h_157e0710-b880-4501-99ad-864b2f70cef5/operator/0.log"
Jan 26 15:50:09 crc kubenswrapper[4922]: I0126 15:50:09.990203 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7gzrt_5b04fb53-39bc-4552-b7af-39e57a4102df/perses-operator/0.log"
Jan 26 15:50:10 crc kubenswrapper[4922]: I0126 15:50:10.092777 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:50:10 crc kubenswrapper[4922]: E0126 15:50:10.093208 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:50:24 crc kubenswrapper[4922]: I0126 15:50:24.092807 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353"
Jan 26 15:50:25 crc kubenswrapper[4922]: I0126 15:50:25.132087 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"2c93d260dd83ee8bf43edcebb93f7b8fbe296ce3987f57568563eeb35729ead9"}
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.674282 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vjnwh"]
Jan 26 15:50:51 crc kubenswrapper[4922]: E0126 15:50:51.676906 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerName="extract-utilities"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.676987 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerName="extract-utilities"
Jan 26 15:50:51 crc kubenswrapper[4922]: E0126 15:50:51.677013 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerName="registry-server"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.677023 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerName="registry-server"
Jan 26 15:50:51 crc kubenswrapper[4922]: E0126 15:50:51.677048 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerName="extract-content"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.677057 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerName="extract-content"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.677411 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9dbdf8-7328-4273-bbfa-72391b9685b8" containerName="registry-server"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.679224 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjnwh"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.694674 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjnwh"]
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.812602 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-utilities\") pod \"redhat-marketplace-vjnwh\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " pod="openshift-marketplace/redhat-marketplace-vjnwh"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.812662 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6247\" (UniqueName: \"kubernetes.io/projected/67ef6cbc-68dc-446c-9070-583119e460ec-kube-api-access-b6247\") pod \"redhat-marketplace-vjnwh\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " pod="openshift-marketplace/redhat-marketplace-vjnwh"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.812882 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-catalog-content\") pod \"redhat-marketplace-vjnwh\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " pod="openshift-marketplace/redhat-marketplace-vjnwh"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.914769 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6247\" (UniqueName: \"kubernetes.io/projected/67ef6cbc-68dc-446c-9070-583119e460ec-kube-api-access-b6247\") pod \"redhat-marketplace-vjnwh\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " pod="openshift-marketplace/redhat-marketplace-vjnwh"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.914948 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-catalog-content\") pod \"redhat-marketplace-vjnwh\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " pod="openshift-marketplace/redhat-marketplace-vjnwh"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.915018 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-utilities\") pod \"redhat-marketplace-vjnwh\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " pod="openshift-marketplace/redhat-marketplace-vjnwh"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.915613 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-utilities\") pod \"redhat-marketplace-vjnwh\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " pod="openshift-marketplace/redhat-marketplace-vjnwh"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.915653 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-catalog-content\") pod \"redhat-marketplace-vjnwh\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " pod="openshift-marketplace/redhat-marketplace-vjnwh"
Jan 26 15:50:51 crc kubenswrapper[4922]: I0126 15:50:51.944262 4922 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"kube-api-access-b6247\" (UniqueName: \"kubernetes.io/projected/67ef6cbc-68dc-446c-9070-583119e460ec-kube-api-access-b6247\") pod \"redhat-marketplace-vjnwh\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " pod="openshift-marketplace/redhat-marketplace-vjnwh" Jan 26 15:50:52 crc kubenswrapper[4922]: I0126 15:50:52.060029 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjnwh" Jan 26 15:50:52 crc kubenswrapper[4922]: I0126 15:50:52.797856 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjnwh"] Jan 26 15:50:53 crc kubenswrapper[4922]: I0126 15:50:53.432834 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjnwh" event={"ID":"67ef6cbc-68dc-446c-9070-583119e460ec","Type":"ContainerStarted","Data":"a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d"} Jan 26 15:50:53 crc kubenswrapper[4922]: I0126 15:50:53.432901 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjnwh" event={"ID":"67ef6cbc-68dc-446c-9070-583119e460ec","Type":"ContainerStarted","Data":"6199ed14b552d4f72683cc6b1e890130e1826b088fc9e656aa3b8031d6c00605"} Jan 26 15:50:54 crc kubenswrapper[4922]: I0126 15:50:54.475674 4922 generic.go:334] "Generic (PLEG): container finished" podID="67ef6cbc-68dc-446c-9070-583119e460ec" containerID="a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d" exitCode=0 Jan 26 15:50:54 crc kubenswrapper[4922]: I0126 15:50:54.475751 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjnwh" event={"ID":"67ef6cbc-68dc-446c-9070-583119e460ec","Type":"ContainerDied","Data":"a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d"} Jan 26 15:50:54 crc kubenswrapper[4922]: I0126 15:50:54.481404 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:50:56 crc kubenswrapper[4922]: I0126 15:50:56.511197 4922 generic.go:334] "Generic (PLEG): container finished" podID="67ef6cbc-68dc-446c-9070-583119e460ec" containerID="6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad" exitCode=0 Jan 26 15:50:56 crc kubenswrapper[4922]: I0126 15:50:56.511425 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjnwh" event={"ID":"67ef6cbc-68dc-446c-9070-583119e460ec","Type":"ContainerDied","Data":"6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad"} Jan 26 15:50:57 crc kubenswrapper[4922]: I0126 15:50:57.522196 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjnwh" event={"ID":"67ef6cbc-68dc-446c-9070-583119e460ec","Type":"ContainerStarted","Data":"dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29"} Jan 26 15:50:57 crc kubenswrapper[4922]: I0126 15:50:57.544188 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vjnwh" podStartSLOduration=4.053881242 podStartE2EDuration="6.544164864s" podCreationTimestamp="2026-01-26 15:50:51 +0000 UTC" firstStartedPulling="2026-01-26 15:50:54.480834594 +0000 UTC m=+6071.683097366" lastFinishedPulling="2026-01-26 15:50:56.971118216 +0000 UTC m=+6074.173380988" observedRunningTime="2026-01-26 15:50:57.541502712 +0000 UTC m=+6074.743765504" watchObservedRunningTime="2026-01-26 15:50:57.544164864 +0000 UTC 
m=+6074.746427656" Jan 26 15:51:02 crc kubenswrapper[4922]: I0126 15:51:02.060305 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vjnwh" Jan 26 15:51:02 crc kubenswrapper[4922]: I0126 15:51:02.060659 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vjnwh" Jan 26 15:51:02 crc kubenswrapper[4922]: I0126 15:51:02.127459 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vjnwh" Jan 26 15:51:02 crc kubenswrapper[4922]: I0126 15:51:02.625514 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vjnwh" Jan 26 15:51:02 crc kubenswrapper[4922]: I0126 15:51:02.682938 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjnwh"] Jan 26 15:51:04 crc kubenswrapper[4922]: I0126 15:51:04.610382 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vjnwh" podUID="67ef6cbc-68dc-446c-9070-583119e460ec" containerName="registry-server" containerID="cri-o://dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29" gracePeriod=2 Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.129358 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjnwh" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.323508 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6247\" (UniqueName: \"kubernetes.io/projected/67ef6cbc-68dc-446c-9070-583119e460ec-kube-api-access-b6247\") pod \"67ef6cbc-68dc-446c-9070-583119e460ec\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.323659 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-catalog-content\") pod \"67ef6cbc-68dc-446c-9070-583119e460ec\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.323994 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-utilities\") pod \"67ef6cbc-68dc-446c-9070-583119e460ec\" (UID: \"67ef6cbc-68dc-446c-9070-583119e460ec\") " Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.325099 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-utilities" (OuterVolumeSpecName: "utilities") pod "67ef6cbc-68dc-446c-9070-583119e460ec" (UID: "67ef6cbc-68dc-446c-9070-583119e460ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.330150 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ef6cbc-68dc-446c-9070-583119e460ec-kube-api-access-b6247" (OuterVolumeSpecName: "kube-api-access-b6247") pod "67ef6cbc-68dc-446c-9070-583119e460ec" (UID: "67ef6cbc-68dc-446c-9070-583119e460ec"). InnerVolumeSpecName "kube-api-access-b6247". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.353364 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67ef6cbc-68dc-446c-9070-583119e460ec" (UID: "67ef6cbc-68dc-446c-9070-583119e460ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.427161 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.427212 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6247\" (UniqueName: \"kubernetes.io/projected/67ef6cbc-68dc-446c-9070-583119e460ec-kube-api-access-b6247\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.427231 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef6cbc-68dc-446c-9070-583119e460ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.622482 4922 generic.go:334] "Generic (PLEG): container finished" podID="67ef6cbc-68dc-446c-9070-583119e460ec" containerID="dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29" exitCode=0 Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.622532 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjnwh" event={"ID":"67ef6cbc-68dc-446c-9070-583119e460ec","Type":"ContainerDied","Data":"dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29"} Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.622561 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjnwh" event={"ID":"67ef6cbc-68dc-446c-9070-583119e460ec","Type":"ContainerDied","Data":"6199ed14b552d4f72683cc6b1e890130e1826b088fc9e656aa3b8031d6c00605"} Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.622581 4922 scope.go:117] "RemoveContainer" containerID="dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.622684 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjnwh" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.659019 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjnwh"] Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.662466 4922 scope.go:117] "RemoveContainer" containerID="6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.671131 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjnwh"] Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.691418 4922 scope.go:117] "RemoveContainer" containerID="a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.733148 4922 scope.go:117] "RemoveContainer" containerID="dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29" Jan 26 15:51:05 crc kubenswrapper[4922]: E0126 15:51:05.733576 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29\": container with ID starting with dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29 not found: ID does not exist" containerID="dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.733606 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29"} err="failed to get container status \"dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29\": rpc error: code = NotFound desc = could not find container \"dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29\": container with ID starting with dae2e48bb9787507c089d5e5932c3d12b17255b57000ae146b01a4c9bd9c2f29 not found: ID does not exist" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.733630 4922 scope.go:117] "RemoveContainer" containerID="6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad" Jan 26 15:51:05 crc kubenswrapper[4922]: E0126 15:51:05.733932 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad\": container with ID starting with 6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad not found: ID does not exist" containerID="6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.734190 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad"} err="failed to get container status \"6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad\": rpc error: code = NotFound desc = could not find container \"6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad\": container with ID starting with 6d0817f6c1d739a8405926ae7f8d3028d6541452296306587fc307ad79c344ad not found: ID does not exist" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.734218 4922 scope.go:117] "RemoveContainer" containerID="a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d" Jan 26 15:51:05 crc kubenswrapper[4922]: E0126 15:51:05.734834 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d\": container with ID starting with a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d not found: ID does not exist" containerID="a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d" Jan 26 15:51:05 crc kubenswrapper[4922]: I0126 15:51:05.734867 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d"} err="failed to get container status \"a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d\": rpc error: code = NotFound desc = could not find container \"a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d\": container with ID starting with a5506c6848962301c356ff371200c634988c4d61c6ae363f243e3ac6482e637d not found: ID does not exist" Jan 26 15:51:07 crc kubenswrapper[4922]: I0126 15:51:07.103934 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ef6cbc-68dc-446c-9070-583119e460ec" path="/var/lib/kubelet/pods/67ef6cbc-68dc-446c-9070-583119e460ec/volumes" Jan 26 15:52:20 crc kubenswrapper[4922]: I0126 15:52:20.405023 4922 generic.go:334] "Generic (PLEG): container finished" podID="664779e3-dd2a-4087-9a93-c964c0c2d869" containerID="dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1" exitCode=0 Jan 26 15:52:20 crc kubenswrapper[4922]: I0126 15:52:20.405124 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkzlp/must-gather-j446m" event={"ID":"664779e3-dd2a-4087-9a93-c964c0c2d869","Type":"ContainerDied","Data":"dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1"} Jan 26 15:52:20 crc kubenswrapper[4922]: I0126 15:52:20.406323 4922 scope.go:117] "RemoveContainer" containerID="dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1" Jan 26 15:52:20 crc kubenswrapper[4922]: I0126 15:52:20.641375 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zkzlp_must-gather-j446m_664779e3-dd2a-4087-9a93-c964c0c2d869/gather/0.log" Jan 26 15:52:21 crc kubenswrapper[4922]: I0126 15:52:21.741542 4922 scope.go:117] "RemoveContainer" containerID="b1031a6cb1f4c1289497f235d1a4f8ebd3e8b6f929c00b7da69970c5cb8ed7eb" Jan 26 15:52:29 crc kubenswrapper[4922]: I0126 15:52:29.826093 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zkzlp/must-gather-j446m"] Jan 26 15:52:29 crc kubenswrapper[4922]: I0126 15:52:29.827463 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zkzlp/must-gather-j446m" podUID="664779e3-dd2a-4087-9a93-c964c0c2d869" containerName="copy" containerID="cri-o://a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a" gracePeriod=2 Jan 26 15:52:29 crc kubenswrapper[4922]: I0126 15:52:29.837478 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zkzlp/must-gather-j446m"] Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.268030 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zkzlp_must-gather-j446m_664779e3-dd2a-4087-9a93-c964c0c2d869/copy/0.log" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.269865 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.379928 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fchg\" (UniqueName: \"kubernetes.io/projected/664779e3-dd2a-4087-9a93-c964c0c2d869-kube-api-access-8fchg\") pod \"664779e3-dd2a-4087-9a93-c964c0c2d869\" (UID: \"664779e3-dd2a-4087-9a93-c964c0c2d869\") " Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.380566 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/664779e3-dd2a-4087-9a93-c964c0c2d869-must-gather-output\") pod \"664779e3-dd2a-4087-9a93-c964c0c2d869\" (UID: \"664779e3-dd2a-4087-9a93-c964c0c2d869\") " Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.392609 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664779e3-dd2a-4087-9a93-c964c0c2d869-kube-api-access-8fchg" (OuterVolumeSpecName: "kube-api-access-8fchg") pod "664779e3-dd2a-4087-9a93-c964c0c2d869" (UID: "664779e3-dd2a-4087-9a93-c964c0c2d869"). InnerVolumeSpecName "kube-api-access-8fchg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.482973 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fchg\" (UniqueName: \"kubernetes.io/projected/664779e3-dd2a-4087-9a93-c964c0c2d869-kube-api-access-8fchg\") on node \"crc\" DevicePath \"\"" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.526976 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zkzlp_must-gather-j446m_664779e3-dd2a-4087-9a93-c964c0c2d869/copy/0.log" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.527391 4922 generic.go:334] "Generic (PLEG): container finished" podID="664779e3-dd2a-4087-9a93-c964c0c2d869" containerID="a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a" exitCode=143 Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.527466 4922 scope.go:117] "RemoveContainer" containerID="a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.527469 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkzlp/must-gather-j446m" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.553392 4922 scope.go:117] "RemoveContainer" containerID="dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.602913 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/664779e3-dd2a-4087-9a93-c964c0c2d869-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "664779e3-dd2a-4087-9a93-c964c0c2d869" (UID: "664779e3-dd2a-4087-9a93-c964c0c2d869"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.688132 4922 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/664779e3-dd2a-4087-9a93-c964c0c2d869-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.716003 4922 scope.go:117] "RemoveContainer" containerID="a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a" Jan 26 15:52:30 crc kubenswrapper[4922]: E0126 15:52:30.719965 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a\": container with ID starting with a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a not found: ID does not exist" containerID="a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.720024 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a"} err="failed to get container status \"a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a\": rpc error: code = NotFound desc = could not find container \"a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a\": container with ID starting with a707a94a2efdb455003c1933272fb603428608604257df2ab87fee88968fee1a not found: ID does not exist" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.720056 4922 scope.go:117] "RemoveContainer" containerID="dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1" Jan 26 15:52:30 crc kubenswrapper[4922]: E0126 15:52:30.720824 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1\": container with ID starting with dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1 not found: ID does not exist" containerID="dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1" Jan 26 15:52:30 crc kubenswrapper[4922]: I0126 15:52:30.720896 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1"} err="failed to get container status \"dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1\": rpc error: code = NotFound desc = could not find container \"dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1\": container with ID starting with dd12598878360b0cce5c32ee0ace12eb8eeb7f8fe2f3a0b262f96f69dbfac0a1 not found: ID does not exist" Jan 26 15:52:31 crc kubenswrapper[4922]: I0126 15:52:31.106874 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664779e3-dd2a-4087-9a93-c964c0c2d869" path="/var/lib/kubelet/pods/664779e3-dd2a-4087-9a93-c964c0c2d869/volumes" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.568388 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n2wnn"] Jan 26 15:52:39 crc kubenswrapper[4922]: E0126 15:52:39.569465 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664779e3-dd2a-4087-9a93-c964c0c2d869" containerName="gather" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.569480 4922 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="664779e3-dd2a-4087-9a93-c964c0c2d869" containerName="gather" Jan 26 15:52:39 crc kubenswrapper[4922]: E0126 15:52:39.569492 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664779e3-dd2a-4087-9a93-c964c0c2d869" containerName="copy" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.569497 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="664779e3-dd2a-4087-9a93-c964c0c2d869" containerName="copy" Jan 26 15:52:39 crc kubenswrapper[4922]: E0126 15:52:39.569509 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ef6cbc-68dc-446c-9070-583119e460ec" containerName="extract-utilities" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.569516 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ef6cbc-68dc-446c-9070-583119e460ec" containerName="extract-utilities" Jan 26 15:52:39 crc kubenswrapper[4922]: E0126 15:52:39.569535 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ef6cbc-68dc-446c-9070-583119e460ec" containerName="extract-content" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.569541 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ef6cbc-68dc-446c-9070-583119e460ec" containerName="extract-content" Jan 26 15:52:39 crc kubenswrapper[4922]: E0126 15:52:39.569557 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ef6cbc-68dc-446c-9070-583119e460ec" containerName="registry-server" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.569563 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ef6cbc-68dc-446c-9070-583119e460ec" containerName="registry-server" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.569747 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="664779e3-dd2a-4087-9a93-c964c0c2d869" containerName="copy" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.569763 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ef6cbc-68dc-446c-9070-583119e460ec" containerName="registry-server" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.569775 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="664779e3-dd2a-4087-9a93-c964c0c2d869" containerName="gather" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.571706 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.594649 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2wnn"] Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.674564 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-catalog-content\") pod \"community-operators-n2wnn\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.674640 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cx9v\" (UniqueName: \"kubernetes.io/projected/53593bc6-3749-431d-8922-07d83e5e77a9-kube-api-access-8cx9v\") pod \"community-operators-n2wnn\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.674862 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-utilities\") pod \"community-operators-n2wnn\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.776754 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-catalog-content\") pod \"community-operators-n2wnn\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.776814 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cx9v\" (UniqueName: \"kubernetes.io/projected/53593bc6-3749-431d-8922-07d83e5e77a9-kube-api-access-8cx9v\") pod \"community-operators-n2wnn\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.776860 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-utilities\") pod \"community-operators-n2wnn\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.777448 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-catalog-content\") pod \"community-operators-n2wnn\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.777481 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-utilities\") pod \"community-operators-n2wnn\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.806983 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8cx9v\" (UniqueName: \"kubernetes.io/projected/53593bc6-3749-431d-8922-07d83e5e77a9-kube-api-access-8cx9v\") pod \"community-operators-n2wnn\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:39 crc kubenswrapper[4922]: I0126 15:52:39.900813 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:40 crc kubenswrapper[4922]: I0126 15:52:40.457737 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2wnn"] Jan 26 15:52:40 crc kubenswrapper[4922]: I0126 15:52:40.620364 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2wnn" event={"ID":"53593bc6-3749-431d-8922-07d83e5e77a9","Type":"ContainerStarted","Data":"554dbe5939d2d30c5f4ba918cafb89e08eec795e436f0cfd521b4d5bcfb4f21f"} Jan 26 15:52:41 crc kubenswrapper[4922]: I0126 15:52:41.306620 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:52:41 crc kubenswrapper[4922]: I0126 15:52:41.306684 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:52:41 crc kubenswrapper[4922]: I0126 15:52:41.629554 4922 generic.go:334] "Generic (PLEG): container finished" podID="53593bc6-3749-431d-8922-07d83e5e77a9" containerID="1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771" exitCode=0 Jan 26 15:52:41 crc kubenswrapper[4922]: I0126 15:52:41.629656 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2wnn" event={"ID":"53593bc6-3749-431d-8922-07d83e5e77a9","Type":"ContainerDied","Data":"1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771"} Jan 26 15:52:42 crc kubenswrapper[4922]: I0126 15:52:42.645174 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2wnn" event={"ID":"53593bc6-3749-431d-8922-07d83e5e77a9","Type":"ContainerStarted","Data":"99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888"} Jan 26 15:52:43 crc kubenswrapper[4922]: I0126 15:52:43.658121 4922 generic.go:334] "Generic (PLEG): container finished" podID="53593bc6-3749-431d-8922-07d83e5e77a9" containerID="99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888" exitCode=0 Jan 26 15:52:43 crc kubenswrapper[4922]: I0126 15:52:43.658228 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2wnn" event={"ID":"53593bc6-3749-431d-8922-07d83e5e77a9","Type":"ContainerDied","Data":"99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888"} Jan 26 15:52:44 crc kubenswrapper[4922]: I0126 15:52:44.676777 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2wnn" event={"ID":"53593bc6-3749-431d-8922-07d83e5e77a9","Type":"ContainerStarted","Data":"9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e"} Jan 26 
15:52:44 crc kubenswrapper[4922]: I0126 15:52:44.703856 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n2wnn" podStartSLOduration=3.146656434 podStartE2EDuration="5.703831287s" podCreationTimestamp="2026-01-26 15:52:39 +0000 UTC" firstStartedPulling="2026-01-26 15:52:41.631646077 +0000 UTC m=+6178.833908849" lastFinishedPulling="2026-01-26 15:52:44.18882093 +0000 UTC m=+6181.391083702" observedRunningTime="2026-01-26 15:52:44.696431376 +0000 UTC m=+6181.898694148" watchObservedRunningTime="2026-01-26 15:52:44.703831287 +0000 UTC m=+6181.906094069" Jan 26 15:52:49 crc kubenswrapper[4922]: I0126 15:52:49.902531 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:49 crc kubenswrapper[4922]: I0126 15:52:49.903831 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:49 crc kubenswrapper[4922]: I0126 15:52:49.966249 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:50 crc kubenswrapper[4922]: I0126 15:52:50.779438 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:50 crc kubenswrapper[4922]: I0126 15:52:50.825877 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n2wnn"] Jan 26 15:52:52 crc kubenswrapper[4922]: I0126 15:52:52.768783 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n2wnn" podUID="53593bc6-3749-431d-8922-07d83e5e77a9" containerName="registry-server" containerID="cri-o://9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e" gracePeriod=2 Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.451342 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.505525 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cx9v\" (UniqueName: \"kubernetes.io/projected/53593bc6-3749-431d-8922-07d83e5e77a9-kube-api-access-8cx9v\") pod \"53593bc6-3749-431d-8922-07d83e5e77a9\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.505717 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-utilities\") pod \"53593bc6-3749-431d-8922-07d83e5e77a9\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.505749 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-catalog-content\") pod \"53593bc6-3749-431d-8922-07d83e5e77a9\" (UID: \"53593bc6-3749-431d-8922-07d83e5e77a9\") " Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.506735 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-utilities" (OuterVolumeSpecName: "utilities") pod "53593bc6-3749-431d-8922-07d83e5e77a9" (UID: "53593bc6-3749-431d-8922-07d83e5e77a9"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.512296 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53593bc6-3749-431d-8922-07d83e5e77a9-kube-api-access-8cx9v" (OuterVolumeSpecName: "kube-api-access-8cx9v") pod "53593bc6-3749-431d-8922-07d83e5e77a9" (UID: "53593bc6-3749-431d-8922-07d83e5e77a9"). InnerVolumeSpecName "kube-api-access-8cx9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.574585 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53593bc6-3749-431d-8922-07d83e5e77a9" (UID: "53593bc6-3749-431d-8922-07d83e5e77a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.608056 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cx9v\" (UniqueName: \"kubernetes.io/projected/53593bc6-3749-431d-8922-07d83e5e77a9-kube-api-access-8cx9v\") on node \"crc\" DevicePath \"\"" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.608159 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.608193 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53593bc6-3749-431d-8922-07d83e5e77a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.781575 4922 generic.go:334] "Generic (PLEG): container finished" podID="53593bc6-3749-431d-8922-07d83e5e77a9" containerID="9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e" exitCode=0 Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.781663 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2wnn" event={"ID":"53593bc6-3749-431d-8922-07d83e5e77a9","Type":"ContainerDied","Data":"9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e"} Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.781691 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n2wnn" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.781722 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2wnn" event={"ID":"53593bc6-3749-431d-8922-07d83e5e77a9","Type":"ContainerDied","Data":"554dbe5939d2d30c5f4ba918cafb89e08eec795e436f0cfd521b4d5bcfb4f21f"} Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.781789 4922 scope.go:117] "RemoveContainer" containerID="9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.804699 4922 scope.go:117] "RemoveContainer" containerID="99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.826365 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n2wnn"] Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.838921 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n2wnn"] Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.840745 4922 scope.go:117] "RemoveContainer" containerID="1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.901239 4922 scope.go:117] "RemoveContainer" containerID="9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e" Jan 26 15:52:53 crc kubenswrapper[4922]: E0126 15:52:53.901854 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e\": container with ID starting with 9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e not found: ID does not exist" containerID="9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.901896 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e"} err="failed to get container status \"9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e\": rpc error: code = NotFound desc = could not find container \"9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e\": container with ID starting with 9a87f2b41e79442970e92cfb34cace4e06529cf5440ed3538f81ee03c2243e8e not found: ID does not exist" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.901931 4922 scope.go:117] "RemoveContainer" containerID="99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888" Jan 26 15:52:53 crc kubenswrapper[4922]: E0126 15:52:53.902413 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888\": container with ID starting with 99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888 not found: ID does not exist" containerID="99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.902452 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888"} err="failed to get container status \"99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888\": rpc error: code = NotFound desc = could not find 
container \"99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888\": container with ID starting with 99b80400726b0eea55bccaea5bd4e02d66225deb9405fbfd89b3ba64efc2a888 not found: ID does not exist" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.902478 4922 scope.go:117] "RemoveContainer" containerID="1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771" Jan 26 15:52:53 crc kubenswrapper[4922]: E0126 15:52:53.902821 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771\": container with ID starting with 1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771 not found: ID does not exist" containerID="1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771" Jan 26 15:52:53 crc kubenswrapper[4922]: I0126 15:52:53.902856 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771"} err="failed to get container status \"1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771\": rpc error: code = NotFound desc = could not find container \"1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771\": container with ID starting with 1d7bdde06eecd3d2e2d7354373fe02de0992ff6427fbdf0c66b39ffd16171771 not found: ID does not exist" Jan 26 15:52:55 crc kubenswrapper[4922]: I0126 15:52:55.102487 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53593bc6-3749-431d-8922-07d83e5e77a9" path="/var/lib/kubelet/pods/53593bc6-3749-431d-8922-07d83e5e77a9/volumes" Jan 26 15:53:11 crc kubenswrapper[4922]: I0126 15:53:11.306591 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:53:11 crc kubenswrapper[4922]: I0126 15:53:11.307258 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:53:21 crc kubenswrapper[4922]: I0126 15:53:21.832792 4922 scope.go:117] "RemoveContainer" containerID="8893047a1c4f63814c20a12363787686beca44f394e7aa3e9d9faffa3498a17c" Jan 26 15:53:41 crc kubenswrapper[4922]: I0126 15:53:41.306488 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:53:41 crc kubenswrapper[4922]: I0126 15:53:41.307031 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:53:41 crc kubenswrapper[4922]: I0126 15:53:41.307104 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" Jan 26 15:53:41 crc kubenswrapper[4922]: I0126 15:53:41.308058 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c93d260dd83ee8bf43edcebb93f7b8fbe296ce3987f57568563eeb35729ead9"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:53:41 crc kubenswrapper[4922]: I0126 15:53:41.308173 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://2c93d260dd83ee8bf43edcebb93f7b8fbe296ce3987f57568563eeb35729ead9" gracePeriod=600 Jan 26 15:53:42 crc kubenswrapper[4922]: I0126 15:53:42.255600 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="2c93d260dd83ee8bf43edcebb93f7b8fbe296ce3987f57568563eeb35729ead9" exitCode=0 Jan 26 15:53:42 crc kubenswrapper[4922]: I0126 15:53:42.255679 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"2c93d260dd83ee8bf43edcebb93f7b8fbe296ce3987f57568563eeb35729ead9"} Jan 26 15:53:42 crc kubenswrapper[4922]: I0126 15:53:42.255911 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f"} Jan 26 15:53:42 crc kubenswrapper[4922]: I0126 15:53:42.255940 4922 scope.go:117] "RemoveContainer" containerID="36d6a483c3789bbb0e12003c27fbf8ec84da01b81c7dc4bbde877631af419353" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.261214 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ggzpb"] Jan 26 15:54:12 crc kubenswrapper[4922]: E0126 15:54:12.262730 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53593bc6-3749-431d-8922-07d83e5e77a9" containerName="extract-content" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.262747 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="53593bc6-3749-431d-8922-07d83e5e77a9" containerName="extract-content" Jan 26 15:54:12 crc kubenswrapper[4922]: E0126 15:54:12.262774 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53593bc6-3749-431d-8922-07d83e5e77a9" containerName="registry-server" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.262781 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="53593bc6-3749-431d-8922-07d83e5e77a9" containerName="registry-server" Jan 26 15:54:12 crc kubenswrapper[4922]: E0126 15:54:12.262796 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53593bc6-3749-431d-8922-07d83e5e77a9" containerName="extract-utilities" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.262802 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="53593bc6-3749-431d-8922-07d83e5e77a9" containerName="extract-utilities" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.262984 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="53593bc6-3749-431d-8922-07d83e5e77a9" 
containerName="registry-server" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.264694 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.291642 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdjbk\" (UniqueName: \"kubernetes.io/projected/585053d3-2abe-4ade-80fa-ab629a7209bb-kube-api-access-tdjbk\") pod \"certified-operators-ggzpb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.291705 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-utilities\") pod \"certified-operators-ggzpb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.291790 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-catalog-content\") pod \"certified-operators-ggzpb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.294227 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggzpb"] Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.394023 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-catalog-content\") pod \"certified-operators-ggzpb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.394206 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdjbk\" (UniqueName: \"kubernetes.io/projected/585053d3-2abe-4ade-80fa-ab629a7209bb-kube-api-access-tdjbk\") pod \"certified-operators-ggzpb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.394272 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-utilities\") pod \"certified-operators-ggzpb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.394652 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-catalog-content\") pod \"certified-operators-ggzpb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.394810 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-utilities\") pod \"certified-operators-ggzpb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " 
pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.431588 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdjbk\" (UniqueName: \"kubernetes.io/projected/585053d3-2abe-4ade-80fa-ab629a7209bb-kube-api-access-tdjbk\") pod \"certified-operators-ggzpb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:12 crc kubenswrapper[4922]: I0126 15:54:12.589714 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:13 crc kubenswrapper[4922]: I0126 15:54:13.149348 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggzpb"] Jan 26 15:54:13 crc kubenswrapper[4922]: I0126 15:54:13.599311 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggzpb" event={"ID":"585053d3-2abe-4ade-80fa-ab629a7209bb","Type":"ContainerStarted","Data":"4b28c36bf60bff39814fbfc0c2585fc671edfb059d39d8e138f1992e21bbab3b"} Jan 26 15:54:14 crc kubenswrapper[4922]: I0126 15:54:14.611629 4922 generic.go:334] "Generic (PLEG): container finished" podID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerID="79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450" exitCode=0 Jan 26 15:54:14 crc kubenswrapper[4922]: I0126 15:54:14.611683 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggzpb" event={"ID":"585053d3-2abe-4ade-80fa-ab629a7209bb","Type":"ContainerDied","Data":"79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450"} Jan 26 15:54:15 crc kubenswrapper[4922]: I0126 15:54:15.623322 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggzpb" event={"ID":"585053d3-2abe-4ade-80fa-ab629a7209bb","Type":"ContainerStarted","Data":"32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d"} Jan 26 15:54:16 crc kubenswrapper[4922]: I0126 15:54:16.636300 4922 generic.go:334] "Generic (PLEG): container finished" podID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerID="32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d" exitCode=0 Jan 26 15:54:16 crc kubenswrapper[4922]: I0126 15:54:16.636557 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggzpb" event={"ID":"585053d3-2abe-4ade-80fa-ab629a7209bb","Type":"ContainerDied","Data":"32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d"} Jan 26 15:54:17 crc kubenswrapper[4922]: I0126 15:54:17.652968 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggzpb" event={"ID":"585053d3-2abe-4ade-80fa-ab629a7209bb","Type":"ContainerStarted","Data":"b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3"} Jan 26 15:54:17 crc kubenswrapper[4922]: I0126 15:54:17.672699 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ggzpb" podStartSLOduration=3.164469489 podStartE2EDuration="5.672667538s" podCreationTimestamp="2026-01-26 15:54:12 +0000 UTC" firstStartedPulling="2026-01-26 15:54:14.615553548 +0000 UTC m=+6271.817816320" lastFinishedPulling="2026-01-26 15:54:17.123751597 +0000 UTC m=+6274.326014369" observedRunningTime="2026-01-26 15:54:17.669190282 +0000 UTC m=+6274.871453064" watchObservedRunningTime="2026-01-26 15:54:17.672667538 
+0000 UTC m=+6274.874930310" Jan 26 15:54:22 crc kubenswrapper[4922]: I0126 15:54:22.590273 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:22 crc kubenswrapper[4922]: I0126 15:54:22.590750 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:22 crc kubenswrapper[4922]: I0126 15:54:22.655476 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:22 crc kubenswrapper[4922]: I0126 15:54:22.773623 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:22 crc kubenswrapper[4922]: I0126 15:54:22.898057 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggzpb"] Jan 26 15:54:24 crc kubenswrapper[4922]: I0126 15:54:24.729874 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ggzpb" podUID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerName="registry-server" containerID="cri-o://b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3" gracePeriod=2 Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.196349 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.288812 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-catalog-content\") pod \"585053d3-2abe-4ade-80fa-ab629a7209bb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.289037 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdjbk\" (UniqueName: \"kubernetes.io/projected/585053d3-2abe-4ade-80fa-ab629a7209bb-kube-api-access-tdjbk\") pod \"585053d3-2abe-4ade-80fa-ab629a7209bb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.289269 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-utilities\") pod \"585053d3-2abe-4ade-80fa-ab629a7209bb\" (UID: \"585053d3-2abe-4ade-80fa-ab629a7209bb\") " Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.290753 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-utilities" (OuterVolumeSpecName: "utilities") pod "585053d3-2abe-4ade-80fa-ab629a7209bb" (UID: "585053d3-2abe-4ade-80fa-ab629a7209bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.296411 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585053d3-2abe-4ade-80fa-ab629a7209bb-kube-api-access-tdjbk" (OuterVolumeSpecName: "kube-api-access-tdjbk") pod "585053d3-2abe-4ade-80fa-ab629a7209bb" (UID: "585053d3-2abe-4ade-80fa-ab629a7209bb"). InnerVolumeSpecName "kube-api-access-tdjbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.350532 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "585053d3-2abe-4ade-80fa-ab629a7209bb" (UID: "585053d3-2abe-4ade-80fa-ab629a7209bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.392417 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.392453 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585053d3-2abe-4ade-80fa-ab629a7209bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.392464 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdjbk\" (UniqueName: \"kubernetes.io/projected/585053d3-2abe-4ade-80fa-ab629a7209bb-kube-api-access-tdjbk\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.741819 4922 generic.go:334] "Generic (PLEG): container finished" podID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerID="b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3" exitCode=0 Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.741892 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggzpb" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.741905 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggzpb" event={"ID":"585053d3-2abe-4ade-80fa-ab629a7209bb","Type":"ContainerDied","Data":"b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3"} Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.742304 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggzpb" event={"ID":"585053d3-2abe-4ade-80fa-ab629a7209bb","Type":"ContainerDied","Data":"4b28c36bf60bff39814fbfc0c2585fc671edfb059d39d8e138f1992e21bbab3b"} Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.742341 4922 scope.go:117] "RemoveContainer" containerID="b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.761433 4922 scope.go:117] "RemoveContainer" containerID="32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.782124 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggzpb"] Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.790944 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ggzpb"] Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.799841 4922 scope.go:117] "RemoveContainer" containerID="79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.832119 4922 scope.go:117] "RemoveContainer" containerID="b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3" Jan 26 15:54:25 crc kubenswrapper[4922]: E0126 15:54:25.832679 4922 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3\": container with ID starting with b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3 not found: ID does not exist" containerID="b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.832728 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3"} err="failed to get container status \"b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3\": rpc error: code = NotFound desc = could not find container \"b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3\": container with ID starting with b425a446b6a5520bc29609c2450d1e25e0a141a5ecb04ac33312b34407bb61c3 not found: ID does not exist" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.832755 4922 scope.go:117] "RemoveContainer" containerID="32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d" Jan 26 15:54:25 crc kubenswrapper[4922]: E0126 15:54:25.833146 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d\": container with ID starting with 32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d not found: ID does not exist" containerID="32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.833171 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d"} err="failed to get container status \"32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d\": rpc error: code = NotFound desc = could not find container \"32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d\": container with ID starting with 32c7fee48324c8e479e4f450dbd6feaff9e81520330f809d3904958687b2a58d not found: ID does not exist" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.833187 4922 scope.go:117] "RemoveContainer" containerID="79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450" Jan 26 15:54:25 crc kubenswrapper[4922]: E0126 15:54:25.833440 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450\": container with ID starting with 79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450 not found: ID does not exist" containerID="79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450" Jan 26 15:54:25 crc kubenswrapper[4922]: I0126 15:54:25.833463 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450"} err="failed to get container status \"79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450\": rpc error: code = NotFound desc = could not find container \"79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450\": container with ID starting with 79f463ed2b1ebbf5cd145b9d23d5ee5513e2675c23ca540288198bc0bfdaf450 not found: ID does not exist" Jan 26 15:54:27 crc kubenswrapper[4922]: I0126 15:54:27.105019 4922 kubelet_volumes.go:163] "Cleaned 
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.845165 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ls27x/must-gather-xctz5"]
Jan 26 15:55:51 crc kubenswrapper[4922]: E0126 15:55:51.845974 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerName="extract-content"
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.845985 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerName="extract-content"
Jan 26 15:55:51 crc kubenswrapper[4922]: E0126 15:55:51.846006 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerName="registry-server"
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.846012 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerName="registry-server"
Jan 26 15:55:51 crc kubenswrapper[4922]: E0126 15:55:51.846043 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerName="extract-utilities"
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.846049 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerName="extract-utilities"
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.846234 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="585053d3-2abe-4ade-80fa-ab629a7209bb" containerName="registry-server"
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.847324 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/must-gather-xctz5"
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.850576 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ls27x"/"openshift-service-ca.crt"
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.850969 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ls27x"/"kube-root-ca.crt"
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.875933 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ls27x/must-gather-xctz5"]
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.991289 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54kx5\" (UniqueName: \"kubernetes.io/projected/3c8ab648-37d5-445d-89e1-f52381d284e7-kube-api-access-54kx5\") pod \"must-gather-xctz5\" (UID: \"3c8ab648-37d5-445d-89e1-f52381d284e7\") " pod="openshift-must-gather-ls27x/must-gather-xctz5"
Jan 26 15:55:51 crc kubenswrapper[4922]: I0126 15:55:51.991914 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c8ab648-37d5-445d-89e1-f52381d284e7-must-gather-output\") pod \"must-gather-xctz5\" (UID: \"3c8ab648-37d5-445d-89e1-f52381d284e7\") " pod="openshift-must-gather-ls27x/must-gather-xctz5"
Jan 26 15:55:52 crc kubenswrapper[4922]: I0126 15:55:52.093617 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c8ab648-37d5-445d-89e1-f52381d284e7-must-gather-output\") pod \"must-gather-xctz5\" (UID: \"3c8ab648-37d5-445d-89e1-f52381d284e7\") " pod="openshift-must-gather-ls27x/must-gather-xctz5"
Jan 26 15:55:52 crc kubenswrapper[4922]: I0126 15:55:52.094094 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54kx5\" (UniqueName: \"kubernetes.io/projected/3c8ab648-37d5-445d-89e1-f52381d284e7-kube-api-access-54kx5\") pod \"must-gather-xctz5\" (UID: \"3c8ab648-37d5-445d-89e1-f52381d284e7\") " pod="openshift-must-gather-ls27x/must-gather-xctz5"
Jan 26 15:55:52 crc kubenswrapper[4922]: I0126 15:55:52.094478 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c8ab648-37d5-445d-89e1-f52381d284e7-must-gather-output\") pod \"must-gather-xctz5\" (UID: \"3c8ab648-37d5-445d-89e1-f52381d284e7\") " pod="openshift-must-gather-ls27x/must-gather-xctz5"
Jan 26 15:55:52 crc kubenswrapper[4922]: I0126 15:55:52.117322 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54kx5\" (UniqueName: \"kubernetes.io/projected/3c8ab648-37d5-445d-89e1-f52381d284e7-kube-api-access-54kx5\") pod \"must-gather-xctz5\" (UID: \"3c8ab648-37d5-445d-89e1-f52381d284e7\") " pod="openshift-must-gather-ls27x/must-gather-xctz5"
Jan 26 15:55:52 crc kubenswrapper[4922]: I0126 15:55:52.166846 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/must-gather-xctz5"
Jan 26 15:55:52 crc kubenswrapper[4922]: I0126 15:55:52.724155 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ls27x/must-gather-xctz5"]
Jan 26 15:55:53 crc kubenswrapper[4922]: I0126 15:55:53.637316 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/must-gather-xctz5" event={"ID":"3c8ab648-37d5-445d-89e1-f52381d284e7","Type":"ContainerStarted","Data":"93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c"}
Jan 26 15:55:53 crc kubenswrapper[4922]: I0126 15:55:53.637409 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/must-gather-xctz5" event={"ID":"3c8ab648-37d5-445d-89e1-f52381d284e7","Type":"ContainerStarted","Data":"d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193"}
Jan 26 15:55:53 crc kubenswrapper[4922]: I0126 15:55:53.637423 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/must-gather-xctz5" event={"ID":"3c8ab648-37d5-445d-89e1-f52381d284e7","Type":"ContainerStarted","Data":"b33377542f9504f449e3dc00b752fcae3c1a5854f0681080fef4ea72f28acd16"}
Jan 26 15:55:53 crc kubenswrapper[4922]: I0126 15:55:53.667782 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ls27x/must-gather-xctz5" podStartSLOduration=2.667708893 podStartE2EDuration="2.667708893s" podCreationTimestamp="2026-01-26 15:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:53.663803326 +0000 UTC m=+6370.866066098" watchObservedRunningTime="2026-01-26 15:55:53.667708893 +0000 UTC m=+6370.869971675"
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.325661 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ls27x/crc-debug-jvzrs"]
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.328837 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.332117 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ls27x"/"default-dockercfg-ssrpc"
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.444013 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-host\") pod \"crc-debug-jvzrs\" (UID: \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\") " pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.444197 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvgx7\" (UniqueName: \"kubernetes.io/projected/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-kube-api-access-pvgx7\") pod \"crc-debug-jvzrs\" (UID: \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\") " pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.545747 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvgx7\" (UniqueName: \"kubernetes.io/projected/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-kube-api-access-pvgx7\") pod \"crc-debug-jvzrs\" (UID: \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\") " pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.545892 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-host\") pod \"crc-debug-jvzrs\" (UID: \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\") " pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.545993 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-host\") pod \"crc-debug-jvzrs\" (UID: \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\") " pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.569007 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvgx7\" (UniqueName: \"kubernetes.io/projected/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-kube-api-access-pvgx7\") pod \"crc-debug-jvzrs\" (UID: \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\") " pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:55:57 crc kubenswrapper[4922]: I0126 15:55:57.651160 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:55:58 crc kubenswrapper[4922]: I0126 15:55:58.686557 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/crc-debug-jvzrs" event={"ID":"5d32f8cd-af12-47eb-9c4a-5b71ca91829b","Type":"ContainerStarted","Data":"e5ea08c53040d5dc628273b44b6a6fd064aca350ecf5702f28ce8bc15e9b4f8a"}
Jan 26 15:55:58 crc kubenswrapper[4922]: I0126 15:55:58.687188 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/crc-debug-jvzrs" event={"ID":"5d32f8cd-af12-47eb-9c4a-5b71ca91829b","Type":"ContainerStarted","Data":"1c8cf030c1988d12f52d9c7a23dc3433ad77a7bf1813216f1058a304a90d6b1b"}
Jan 26 15:55:58 crc kubenswrapper[4922]: I0126 15:55:58.701357 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ls27x/crc-debug-jvzrs" podStartSLOduration=1.701338841 podStartE2EDuration="1.701338841s" podCreationTimestamp="2026-01-26 15:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:55:58.699313726 +0000 UTC m=+6375.901576498" watchObservedRunningTime="2026-01-26 15:55:58.701338841 +0000 UTC m=+6375.903601613"
Jan 26 15:56:11 crc kubenswrapper[4922]: I0126 15:56:11.306541 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:56:11 crc kubenswrapper[4922]: I0126 15:56:11.307189 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:56:41 crc kubenswrapper[4922]: I0126 15:56:41.306532 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:56:41 crc kubenswrapper[4922]: I0126 15:56:41.306960 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:56:41 crc kubenswrapper[4922]: I0126 15:56:41.415522 4922 generic.go:334] "Generic (PLEG): container finished" podID="5d32f8cd-af12-47eb-9c4a-5b71ca91829b" containerID="e5ea08c53040d5dc628273b44b6a6fd064aca350ecf5702f28ce8bc15e9b4f8a" exitCode=0
Jan 26 15:56:41 crc kubenswrapper[4922]: I0126 15:56:41.415571 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/crc-debug-jvzrs" event={"ID":"5d32f8cd-af12-47eb-9c4a-5b71ca91829b","Type":"ContainerDied","Data":"e5ea08c53040d5dc628273b44b6a6fd064aca350ecf5702f28ce8bc15e9b4f8a"}
Jan 26 15:56:42 crc kubenswrapper[4922]: I0126 15:56:42.559616 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:56:42 crc kubenswrapper[4922]: I0126 15:56:42.601222 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ls27x/crc-debug-jvzrs"]
Jan 26 15:56:42 crc kubenswrapper[4922]: I0126 15:56:42.614638 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ls27x/crc-debug-jvzrs"]
Jan 26 15:56:42 crc kubenswrapper[4922]: I0126 15:56:42.643734 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-host\") pod \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\" (UID: \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\") "
Jan 26 15:56:42 crc kubenswrapper[4922]: I0126 15:56:42.644106 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-host" (OuterVolumeSpecName: "host") pod "5d32f8cd-af12-47eb-9c4a-5b71ca91829b" (UID: "5d32f8cd-af12-47eb-9c4a-5b71ca91829b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:56:42 crc kubenswrapper[4922]: I0126 15:56:42.644566 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvgx7\" (UniqueName: \"kubernetes.io/projected/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-kube-api-access-pvgx7\") pod \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\" (UID: \"5d32f8cd-af12-47eb-9c4a-5b71ca91829b\") "
Jan 26 15:56:42 crc kubenswrapper[4922]: I0126 15:56:42.646781 4922 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-host\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:42 crc kubenswrapper[4922]: I0126 15:56:42.664743 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-kube-api-access-pvgx7" (OuterVolumeSpecName: "kube-api-access-pvgx7") pod "5d32f8cd-af12-47eb-9c4a-5b71ca91829b" (UID: "5d32f8cd-af12-47eb-9c4a-5b71ca91829b"). InnerVolumeSpecName "kube-api-access-pvgx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:56:42 crc kubenswrapper[4922]: I0126 15:56:42.748717 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvgx7\" (UniqueName: \"kubernetes.io/projected/5d32f8cd-af12-47eb-9c4a-5b71ca91829b-kube-api-access-pvgx7\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.108732 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d32f8cd-af12-47eb-9c4a-5b71ca91829b" path="/var/lib/kubelet/pods/5d32f8cd-af12-47eb-9c4a-5b71ca91829b/volumes"
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.434727 4922 scope.go:117] "RemoveContainer" containerID="e5ea08c53040d5dc628273b44b6a6fd064aca350ecf5702f28ce8bc15e9b4f8a"
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.435193 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-jvzrs"
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.812805 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ls27x/crc-debug-dqfzw"]
Jan 26 15:56:43 crc kubenswrapper[4922]: E0126 15:56:43.813238 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d32f8cd-af12-47eb-9c4a-5b71ca91829b" containerName="container-00"
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.813251 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d32f8cd-af12-47eb-9c4a-5b71ca91829b" containerName="container-00"
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.813486 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d32f8cd-af12-47eb-9c4a-5b71ca91829b" containerName="container-00"
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.814299 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.817032 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ls27x"/"default-dockercfg-ssrpc"
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.973647 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93d66103-0219-4c6d-9964-963841996095-host\") pod \"crc-debug-dqfzw\" (UID: \"93d66103-0219-4c6d-9964-963841996095\") " pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:43 crc kubenswrapper[4922]: I0126 15:56:43.973822 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxt49\" (UniqueName: \"kubernetes.io/projected/93d66103-0219-4c6d-9964-963841996095-kube-api-access-hxt49\") pod \"crc-debug-dqfzw\" (UID: \"93d66103-0219-4c6d-9964-963841996095\") " pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:44 crc kubenswrapper[4922]: I0126 15:56:44.076200 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93d66103-0219-4c6d-9964-963841996095-host\") pod \"crc-debug-dqfzw\" (UID: \"93d66103-0219-4c6d-9964-963841996095\") " pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:44 crc kubenswrapper[4922]: I0126 15:56:44.076396 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93d66103-0219-4c6d-9964-963841996095-host\") pod \"crc-debug-dqfzw\" (UID: \"93d66103-0219-4c6d-9964-963841996095\") " pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:44 crc kubenswrapper[4922]: I0126 15:56:44.076405 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxt49\" (UniqueName: \"kubernetes.io/projected/93d66103-0219-4c6d-9964-963841996095-kube-api-access-hxt49\") pod \"crc-debug-dqfzw\" (UID: \"93d66103-0219-4c6d-9964-963841996095\") " pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:44 crc kubenswrapper[4922]: I0126 15:56:44.096879 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxt49\" (UniqueName: \"kubernetes.io/projected/93d66103-0219-4c6d-9964-963841996095-kube-api-access-hxt49\") pod \"crc-debug-dqfzw\" (UID: \"93d66103-0219-4c6d-9964-963841996095\") " pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:44 crc kubenswrapper[4922]: I0126 15:56:44.142762 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:44 crc kubenswrapper[4922]: W0126 15:56:44.168929 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93d66103_0219_4c6d_9964_963841996095.slice/crio-1cea21f4fc7418efbc0840f70b4f8c6e8a30be46cc398d619d10ef5d2168adf5 WatchSource:0}: Error finding container 1cea21f4fc7418efbc0840f70b4f8c6e8a30be46cc398d619d10ef5d2168adf5: Status 404 returned error can't find the container with id 1cea21f4fc7418efbc0840f70b4f8c6e8a30be46cc398d619d10ef5d2168adf5
Jan 26 15:56:44 crc kubenswrapper[4922]: I0126 15:56:44.446441 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/crc-debug-dqfzw" event={"ID":"93d66103-0219-4c6d-9964-963841996095","Type":"ContainerStarted","Data":"9dfb2d86e634ea0cff70d96d2345a1939c147b51e42a5df6cb4a66eb467c1efa"}
Jan 26 15:56:44 crc kubenswrapper[4922]: I0126 15:56:44.446906 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/crc-debug-dqfzw" event={"ID":"93d66103-0219-4c6d-9964-963841996095","Type":"ContainerStarted","Data":"1cea21f4fc7418efbc0840f70b4f8c6e8a30be46cc398d619d10ef5d2168adf5"}
Jan 26 15:56:44 crc kubenswrapper[4922]: I0126 15:56:44.467516 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ls27x/crc-debug-dqfzw" podStartSLOduration=1.46749691 podStartE2EDuration="1.46749691s" podCreationTimestamp="2026-01-26 15:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:44.460171511 +0000 UTC m=+6421.662434293" watchObservedRunningTime="2026-01-26 15:56:44.46749691 +0000 UTC m=+6421.669759682"
Jan 26 15:56:45 crc kubenswrapper[4922]: I0126 15:56:45.462527 4922 generic.go:334] "Generic (PLEG): container finished" podID="93d66103-0219-4c6d-9964-963841996095" containerID="9dfb2d86e634ea0cff70d96d2345a1939c147b51e42a5df6cb4a66eb467c1efa" exitCode=0
Jan 26 15:56:45 crc kubenswrapper[4922]: I0126 15:56:45.463084 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/crc-debug-dqfzw" event={"ID":"93d66103-0219-4c6d-9964-963841996095","Type":"ContainerDied","Data":"9dfb2d86e634ea0cff70d96d2345a1939c147b51e42a5df6cb4a66eb467c1efa"}
Jan 26 15:56:46 crc kubenswrapper[4922]: I0126 15:56:46.615358 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:46 crc kubenswrapper[4922]: I0126 15:56:46.762152 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93d66103-0219-4c6d-9964-963841996095-host\") pod \"93d66103-0219-4c6d-9964-963841996095\" (UID: \"93d66103-0219-4c6d-9964-963841996095\") "
Jan 26 15:56:46 crc kubenswrapper[4922]: I0126 15:56:46.762413 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxt49\" (UniqueName: \"kubernetes.io/projected/93d66103-0219-4c6d-9964-963841996095-kube-api-access-hxt49\") pod \"93d66103-0219-4c6d-9964-963841996095\" (UID: \"93d66103-0219-4c6d-9964-963841996095\") "
Jan 26 15:56:46 crc kubenswrapper[4922]: I0126 15:56:46.762705 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93d66103-0219-4c6d-9964-963841996095-host" (OuterVolumeSpecName: "host") pod "93d66103-0219-4c6d-9964-963841996095" (UID: "93d66103-0219-4c6d-9964-963841996095"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:56:46 crc kubenswrapper[4922]: I0126 15:56:46.763014 4922 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93d66103-0219-4c6d-9964-963841996095-host\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:46 crc kubenswrapper[4922]: I0126 15:56:46.770842 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d66103-0219-4c6d-9964-963841996095-kube-api-access-hxt49" (OuterVolumeSpecName: "kube-api-access-hxt49") pod "93d66103-0219-4c6d-9964-963841996095" (UID: "93d66103-0219-4c6d-9964-963841996095"). InnerVolumeSpecName "kube-api-access-hxt49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:56:46 crc kubenswrapper[4922]: I0126 15:56:46.864813 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxt49\" (UniqueName: \"kubernetes.io/projected/93d66103-0219-4c6d-9964-963841996095-kube-api-access-hxt49\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:47 crc kubenswrapper[4922]: I0126 15:56:47.234682 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ls27x/crc-debug-dqfzw"]
Jan 26 15:56:47 crc kubenswrapper[4922]: I0126 15:56:47.245591 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ls27x/crc-debug-dqfzw"]
Jan 26 15:56:47 crc kubenswrapper[4922]: I0126 15:56:47.489545 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cea21f4fc7418efbc0840f70b4f8c6e8a30be46cc398d619d10ef5d2168adf5"
Jan 26 15:56:47 crc kubenswrapper[4922]: I0126 15:56:47.489753 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-dqfzw"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.428434 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ls27x/crc-debug-vhshw"]
Jan 26 15:56:48 crc kubenswrapper[4922]: E0126 15:56:48.429223 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d66103-0219-4c6d-9964-963841996095" containerName="container-00"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.429239 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d66103-0219-4c6d-9964-963841996095" containerName="container-00"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.429475 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d66103-0219-4c6d-9964-963841996095" containerName="container-00"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.430175 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.432599 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ls27x"/"default-dockercfg-ssrpc"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.518595 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l5qh\" (UniqueName: \"kubernetes.io/projected/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-kube-api-access-2l5qh\") pod \"crc-debug-vhshw\" (UID: \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\") " pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.519314 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-host\") pod \"crc-debug-vhshw\" (UID: \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\") " pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.622291 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-host\") pod \"crc-debug-vhshw\" (UID: \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\") " pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.622460 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l5qh\" (UniqueName: \"kubernetes.io/projected/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-kube-api-access-2l5qh\") pod \"crc-debug-vhshw\" (UID: \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\") " pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.622761 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-host\") pod \"crc-debug-vhshw\" (UID: \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\") " pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.649882 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l5qh\" (UniqueName: \"kubernetes.io/projected/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-kube-api-access-2l5qh\") pod \"crc-debug-vhshw\" (UID: \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\") " pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:48 crc kubenswrapper[4922]: I0126 15:56:48.747049 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:48 crc kubenswrapper[4922]: W0126 15:56:48.794123 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a1d3c29_207b_42b6_bed5_d47868b4ab8c.slice/crio-bb5272a4fd9d40d1eb5f556ea5bc1c6bfe11cf100f059b4522fe069ce5c8a173 WatchSource:0}: Error finding container bb5272a4fd9d40d1eb5f556ea5bc1c6bfe11cf100f059b4522fe069ce5c8a173: Status 404 returned error can't find the container with id bb5272a4fd9d40d1eb5f556ea5bc1c6bfe11cf100f059b4522fe069ce5c8a173
Jan 26 15:56:49 crc kubenswrapper[4922]: I0126 15:56:49.110538 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d66103-0219-4c6d-9964-963841996095" path="/var/lib/kubelet/pods/93d66103-0219-4c6d-9964-963841996095/volumes"
Jan 26 15:56:49 crc kubenswrapper[4922]: E0126 15:56:49.223642 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice/crio-1c8cf030c1988d12f52d9c7a23dc3433ad77a7bf1813216f1058a304a90d6b1b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a1d3c29_207b_42b6_bed5_d47868b4ab8c.slice/crio-conmon-dbb4d38aeb5c9552eb06f180978fad6a7642cd0e726002dda48fa17937d0a0a5.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 15:56:49 crc kubenswrapper[4922]: I0126 15:56:49.510837 4922 generic.go:334] "Generic (PLEG): container finished" podID="1a1d3c29-207b-42b6-bed5-d47868b4ab8c" containerID="dbb4d38aeb5c9552eb06f180978fad6a7642cd0e726002dda48fa17937d0a0a5" exitCode=0
Jan 26 15:56:49 crc kubenswrapper[4922]: I0126 15:56:49.510937 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/crc-debug-vhshw" event={"ID":"1a1d3c29-207b-42b6-bed5-d47868b4ab8c","Type":"ContainerDied","Data":"dbb4d38aeb5c9552eb06f180978fad6a7642cd0e726002dda48fa17937d0a0a5"}
Jan 26 15:56:49 crc kubenswrapper[4922]: I0126 15:56:49.511291 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/crc-debug-vhshw" event={"ID":"1a1d3c29-207b-42b6-bed5-d47868b4ab8c","Type":"ContainerStarted","Data":"bb5272a4fd9d40d1eb5f556ea5bc1c6bfe11cf100f059b4522fe069ce5c8a173"}
Jan 26 15:56:49 crc kubenswrapper[4922]: I0126 15:56:49.553049 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ls27x/crc-debug-vhshw"]
Jan 26 15:56:49 crc kubenswrapper[4922]: I0126 15:56:49.565351 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ls27x/crc-debug-vhshw"]
Jan 26 15:56:50 crc kubenswrapper[4922]: I0126 15:56:50.640005 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:50 crc kubenswrapper[4922]: I0126 15:56:50.666339 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l5qh\" (UniqueName: \"kubernetes.io/projected/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-kube-api-access-2l5qh\") pod \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\" (UID: \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\") "
Jan 26 15:56:50 crc kubenswrapper[4922]: I0126 15:56:50.666461 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-host\") pod \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\" (UID: \"1a1d3c29-207b-42b6-bed5-d47868b4ab8c\") "
Jan 26 15:56:50 crc kubenswrapper[4922]: I0126 15:56:50.666731 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-host" (OuterVolumeSpecName: "host") pod "1a1d3c29-207b-42b6-bed5-d47868b4ab8c" (UID: "1a1d3c29-207b-42b6-bed5-d47868b4ab8c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:56:50 crc kubenswrapper[4922]: I0126 15:56:50.667313 4922 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-host\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:50 crc kubenswrapper[4922]: I0126 15:56:50.673230 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-kube-api-access-2l5qh" (OuterVolumeSpecName: "kube-api-access-2l5qh") pod "1a1d3c29-207b-42b6-bed5-d47868b4ab8c" (UID: "1a1d3c29-207b-42b6-bed5-d47868b4ab8c"). InnerVolumeSpecName "kube-api-access-2l5qh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:56:50 crc kubenswrapper[4922]: I0126 15:56:50.768892 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l5qh\" (UniqueName: \"kubernetes.io/projected/1a1d3c29-207b-42b6-bed5-d47868b4ab8c-kube-api-access-2l5qh\") on node \"crc\" DevicePath \"\""
Jan 26 15:56:51 crc kubenswrapper[4922]: I0126 15:56:51.106418 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a1d3c29-207b-42b6-bed5-d47868b4ab8c" path="/var/lib/kubelet/pods/1a1d3c29-207b-42b6-bed5-d47868b4ab8c/volumes"
Jan 26 15:56:51 crc kubenswrapper[4922]: I0126 15:56:51.531651 4922 scope.go:117] "RemoveContainer" containerID="dbb4d38aeb5c9552eb06f180978fad6a7642cd0e726002dda48fa17937d0a0a5"
Jan 26 15:56:51 crc kubenswrapper[4922]: I0126 15:56:51.531710 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ls27x/crc-debug-vhshw"
Jan 26 15:56:59 crc kubenswrapper[4922]: E0126 15:56:59.485129 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice/crio-1c8cf030c1988d12f52d9c7a23dc3433ad77a7bf1813216f1058a304a90d6b1b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 15:57:09 crc kubenswrapper[4922]: E0126 15:57:09.787292 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice/crio-1c8cf030c1988d12f52d9c7a23dc3433ad77a7bf1813216f1058a304a90d6b1b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 15:57:11 crc kubenswrapper[4922]: I0126 15:57:11.307375 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:57:11 crc kubenswrapper[4922]: I0126 15:57:11.307794 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:57:11 crc kubenswrapper[4922]: I0126 15:57:11.307833 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j"
Jan 26 15:57:11 crc kubenswrapper[4922]: I0126 15:57:11.308702 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f"} pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 15:57:11 crc kubenswrapper[4922]: I0126 15:57:11.308746 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" containerID="cri-o://17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" gracePeriod=600
Jan 26 15:57:11 crc kubenswrapper[4922]: E0126 15:57:11.440569 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:57:11 crc kubenswrapper[4922]: I0126 15:57:11.730648 4922 generic.go:334] "Generic (PLEG): container finished" podID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" exitCode=0
Jan 26 15:57:11 crc kubenswrapper[4922]: I0126 15:57:11.730908 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerDied","Data":"17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f"}
Jan 26 15:57:11 crc kubenswrapper[4922]: I0126 15:57:11.731722 4922 scope.go:117] "RemoveContainer" containerID="2c93d260dd83ee8bf43edcebb93f7b8fbe296ce3987f57568563eeb35729ead9"
Jan 26 15:57:11 crc kubenswrapper[4922]: I0126 15:57:11.732528 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f"
Jan 26 15:57:11 crc kubenswrapper[4922]: E0126 15:57:11.732957 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:57:20 crc kubenswrapper[4922]: E0126 15:57:20.068095 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice/crio-1c8cf030c1988d12f52d9c7a23dc3433ad77a7bf1813216f1058a304a90d6b1b\": RecentStats: unable to find data in memory cache]"
Jan 26 15:57:24 crc kubenswrapper[4922]: I0126 15:57:24.092057 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f"
Jan 26 15:57:24 crc kubenswrapper[4922]: E0126 15:57:24.092830 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7"
Jan 26 15:57:30 crc kubenswrapper[4922]: E0126 15:57:30.349815 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice/crio-1c8cf030c1988d12f52d9c7a23dc3433ad77a7bf1813216f1058a304a90d6b1b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 15:57:33 crc kubenswrapper[4922]: I0126 15:57:33.576560 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75bc76c88b-b6znr_5ed1bf50-3aec-40cc-843f-afe6a0b2027d/barbican-api/0.log"
Jan 26 15:57:33 crc kubenswrapper[4922]: I0126 15:57:33.706947 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75bc76c88b-b6znr_5ed1bf50-3aec-40cc-843f-afe6a0b2027d/barbican-api-log/0.log"
path="/var/log/pods/openstack_barbican-api-75bc76c88b-b6znr_5ed1bf50-3aec-40cc-843f-afe6a0b2027d/barbican-api-log/0.log" Jan 26 15:57:33 crc kubenswrapper[4922]: I0126 15:57:33.779299 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f88dc4bbb-qt8fk_94ec145d-ce40-473f-8598-dbf02d89cc44/barbican-keystone-listener/0.log" Jan 26 15:57:33 crc kubenswrapper[4922]: I0126 15:57:33.881719 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f88dc4bbb-qt8fk_94ec145d-ce40-473f-8598-dbf02d89cc44/barbican-keystone-listener-log/0.log" Jan 26 15:57:33 crc kubenswrapper[4922]: I0126 15:57:33.994089 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bff9847b7-bs5nf_857d24cb-db5e-45ef-9b8a-025ee81b0083/barbican-worker-log/0.log" Jan 26 15:57:33 crc kubenswrapper[4922]: I0126 15:57:33.995229 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bff9847b7-bs5nf_857d24cb-db5e-45ef-9b8a-025ee81b0083/barbican-worker/0.log" Jan 26 15:57:34 crc kubenswrapper[4922]: I0126 15:57:34.252813 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-x4h65_c6728b4b-8be0-4841-bbd4-0832817d537e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:34 crc kubenswrapper[4922]: I0126 15:57:34.284656 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_31909446-1712-442b-a346-7b4bb84f8584/ceilometer-central-agent/0.log" Jan 26 15:57:34 crc kubenswrapper[4922]: I0126 15:57:34.422030 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_31909446-1712-442b-a346-7b4bb84f8584/ceilometer-notification-agent/0.log" Jan 26 15:57:34 crc kubenswrapper[4922]: I0126 15:57:34.459012 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_31909446-1712-442b-a346-7b4bb84f8584/proxy-httpd/0.log" Jan 26 15:57:34 crc kubenswrapper[4922]: I0126 15:57:34.471394 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_31909446-1712-442b-a346-7b4bb84f8584/sg-core/0.log" Jan 26 15:57:34 crc kubenswrapper[4922]: I0126 15:57:34.726023 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2408f586-2d21-49ee-a728-08b3190483b8/cinder-api-log/0.log" Jan 26 15:57:35 crc kubenswrapper[4922]: I0126 15:57:35.013849 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_643689f7-b9d6-4f8a-a41b-a2a473973bd2/probe/0.log" Jan 26 15:57:35 crc kubenswrapper[4922]: I0126 15:57:35.093022 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:57:35 crc kubenswrapper[4922]: E0126 15:57:35.093815 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:57:35 crc kubenswrapper[4922]: I0126 15:57:35.111106 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_643689f7-b9d6-4f8a-a41b-a2a473973bd2/cinder-backup/0.log" Jan 26 15:57:35 crc 
kubenswrapper[4922]: I0126 15:57:35.344458 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9456dfb4-60f4-440a-b11c-aef57ca86762/probe/0.log" Jan 26 15:57:35 crc kubenswrapper[4922]: I0126 15:57:35.349728 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9456dfb4-60f4-440a-b11c-aef57ca86762/cinder-scheduler/0.log" Jan 26 15:57:35 crc kubenswrapper[4922]: I0126 15:57:35.365186 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2408f586-2d21-49ee-a728-08b3190483b8/cinder-api/0.log" Jan 26 15:57:35 crc kubenswrapper[4922]: I0126 15:57:35.619264 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_25f323f8-db37-45ee-8db5-e3248826d64e/probe/0.log" Jan 26 15:57:35 crc kubenswrapper[4922]: I0126 15:57:35.809837 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_25f323f8-db37-45ee-8db5-e3248826d64e/cinder-volume/0.log" Jan 26 15:57:35 crc kubenswrapper[4922]: I0126 15:57:35.874175 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_24f527f0-1574-4733-8102-e412468ad8a6/probe/0.log" Jan 26 15:57:35 crc kubenswrapper[4922]: I0126 15:57:35.892620 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_24f527f0-1574-4733-8102-e412468ad8a6/cinder-volume/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.019722 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pgvn8_967a08bd-ab17-442c-bc7f-0a37ecd86306/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.104657 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xsptg_cd6ac053-8747-40cb-87df-2ad523dafbf0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.209918 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78d579bbd7-jvjv8_deee4578-fc2b-4162-b39e-012e9b6b2e8a/init/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.382729 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78d579bbd7-jvjv8_deee4578-fc2b-4162-b39e-012e9b6b2e8a/init/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.407572 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gcv99_e8749ec8-770d-498f-9ace-ad44e3385a36/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.576514 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78d579bbd7-jvjv8_deee4578-fc2b-4162-b39e-012e9b6b2e8a/dnsmasq-dns/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.655442 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a055cb9-0f37-4772-a2af-63c1517cb256/glance-httpd/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.687336 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a055cb9-0f37-4772-a2af-63c1517cb256/glance-log/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.873868 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9/glance-httpd/0.log" Jan 26 15:57:36 crc kubenswrapper[4922]: I0126 15:57:36.933000 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_abe7a9f1-b3e5-48c7-9487-dfc57d84a5e9/glance-log/0.log" Jan 26 15:57:37 crc kubenswrapper[4922]: I0126 15:57:37.100158 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c779658fd-pldff_0c995c1b-6b75-4638-a5d7-1df1539dcaeb/horizon/0.log" Jan 26 15:57:37 crc kubenswrapper[4922]: I0126 15:57:37.196648 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5zcdw_dfdf6694-c807-448c-beed-03053e451f2b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:37 crc kubenswrapper[4922]: I0126 15:57:37.424792 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rl9qs_ef2f11f3-ab6f-449f-9bf8-1306119e67ad/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:37 crc kubenswrapper[4922]: I0126 15:57:37.788929 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490661-z2dhr_a1e08b81-5e31-4556-93f4-06430fed0f54/keystone-cron/0.log" Jan 26 15:57:37 crc kubenswrapper[4922]: I0126 15:57:37.939258 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_131b28f9-a3ee-401d-a4e0-f66ec118f156/kube-state-metrics/0.log" Jan 26 15:57:37 crc kubenswrapper[4922]: I0126 15:57:37.997398 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c779658fd-pldff_0c995c1b-6b75-4638-a5d7-1df1539dcaeb/horizon-log/0.log" Jan 26 15:57:38 crc kubenswrapper[4922]: I0126 15:57:38.203400 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8q9b4_eb0a3861-3e56-4795-a6b3-48870bdf183a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:38 crc kubenswrapper[4922]: I0126 15:57:38.237453 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7cd8b7c676-rg4sd_aead8c46-9f8b-45dd-9561-b320c5c7bde4/keystone-api/0.log" Jan 26 15:57:38 crc kubenswrapper[4922]: I0126 15:57:38.635218 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hxtx2_dd09600e-1e19-4a04-8e03-12312a20e513/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:38 crc kubenswrapper[4922]: I0126 15:57:38.733806 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75cdcc7857-fs8tr_f79e2698-4080-4a22-8110-89e8c7217018/neutron-httpd/0.log" Jan 26 15:57:38 crc kubenswrapper[4922]: I0126 15:57:38.752369 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75cdcc7857-fs8tr_f79e2698-4080-4a22-8110-89e8c7217018/neutron-api/0.log" Jan 26 15:57:39 crc kubenswrapper[4922]: I0126 15:57:39.368175 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9d39f315-a7ea-4004-a187-649a4ff3846b/nova-cell0-conductor-conductor/0.log" Jan 26 15:57:39 crc kubenswrapper[4922]: I0126 15:57:39.690186 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_b8407196-4ae4-4db4-95d5-3498f9503f5e/nova-cell1-conductor-conductor/0.log" Jan 26 15:57:40 crc kubenswrapper[4922]: I0126 15:57:40.018213 4922 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f83d6fb5-2b7a-4982-9719-3a03aa125f00/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 15:57:40 crc kubenswrapper[4922]: I0126 15:57:40.210923 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-kljvk_91f3b2e5-b22e-4b5c-b5e9-6b0bbf6c559b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:40 crc kubenswrapper[4922]: I0126 15:57:40.291046 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ec77602e-4cce-4d70-90ec-6d6adc5f6643/nova-api-log/0.log" Jan 26 15:57:40 crc kubenswrapper[4922]: I0126 15:57:40.583947 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d01bf414-0bdd-49f2-aa15-54f8ddb04d7b/nova-metadata-log/0.log" Jan 26 15:57:40 crc kubenswrapper[4922]: E0126 15:57:40.589603 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d32f8cd_af12_47eb_9c4a_5b71ca91829b.slice/crio-1c8cf030c1988d12f52d9c7a23dc3433ad77a7bf1813216f1058a304a90d6b1b\": RecentStats: unable to find data in memory cache]" Jan 26 15:57:40 crc kubenswrapper[4922]: I0126 15:57:40.980847 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ec77602e-4cce-4d70-90ec-6d6adc5f6643/nova-api-api/0.log" Jan 26 15:57:41 crc kubenswrapper[4922]: I0126 15:57:41.041498 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_729a7732-744d-4ef7-b2c5-054f0f5f7f79/mysql-bootstrap/0.log" Jan 26 15:57:41 crc kubenswrapper[4922]: I0126 15:57:41.119267 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ef2cf2cb-aa43-4e4d-be50-8476dcdc6f77/nova-scheduler-scheduler/0.log" Jan 26 15:57:41 crc kubenswrapper[4922]: I0126 15:57:41.229252 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_729a7732-744d-4ef7-b2c5-054f0f5f7f79/mysql-bootstrap/0.log" Jan 26 15:57:41 crc kubenswrapper[4922]: I0126 15:57:41.262257 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_729a7732-744d-4ef7-b2c5-054f0f5f7f79/galera/0.log" Jan 26 15:57:41 crc kubenswrapper[4922]: I0126 15:57:41.440538 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_205c6bf6-b838-4bea-9cf8-df9fe42bd53f/mysql-bootstrap/0.log" Jan 26 15:57:41 crc kubenswrapper[4922]: I0126 15:57:41.651195 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_205c6bf6-b838-4bea-9cf8-df9fe42bd53f/mysql-bootstrap/0.log" Jan 26 15:57:41 crc kubenswrapper[4922]: I0126 15:57:41.669694 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_205c6bf6-b838-4bea-9cf8-df9fe42bd53f/galera/0.log" Jan 26 15:57:41 crc kubenswrapper[4922]: I0126 15:57:41.880202 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c962db74-b70e-44df-a3d2-8a2dda688ca8/openstackclient/0.log" Jan 26 15:57:41 crc kubenswrapper[4922]: I0126 15:57:41.892126 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-2nwnw_6d76ee55-7df1-42fb-817b-031f44d36f82/openstack-network-exporter/0.log" Jan 26 15:57:42 crc kubenswrapper[4922]: I0126 15:57:42.109975 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fpgzk_7088cbad-121a-40f6-9934-60a62f980b6d/ovsdb-server-init/0.log" Jan 26 15:57:42 crc kubenswrapper[4922]: I0126 15:57:42.307879 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fpgzk_7088cbad-121a-40f6-9934-60a62f980b6d/ovsdb-server-init/0.log" Jan 26 15:57:42 crc kubenswrapper[4922]: I0126 15:57:42.322367 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fpgzk_7088cbad-121a-40f6-9934-60a62f980b6d/ovsdb-server/0.log" Jan 26 15:57:42 crc kubenswrapper[4922]: I0126 15:57:42.562097 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x4rqw_1a2c2044-5422-40dc-92f5-051f1da6b2a2/ovn-controller/0.log" Jan 26 15:57:42 crc kubenswrapper[4922]: I0126 15:57:42.814951 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-wvsjz_d6ff5d49-d748-41c2-9893-b3cd1fd09b2d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:42 crc kubenswrapper[4922]: I0126 15:57:42.822674 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fpgzk_7088cbad-121a-40f6-9934-60a62f980b6d/ovs-vswitchd/0.log" Jan 26 15:57:43 crc kubenswrapper[4922]: I0126 15:57:43.035984 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_44db7ec1-3a40-46de-b048-94191897a988/ovn-northd/0.log" Jan 26 15:57:43 crc kubenswrapper[4922]: I0126 15:57:43.130985 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_44db7ec1-3a40-46de-b048-94191897a988/openstack-network-exporter/0.log" Jan 26 15:57:43 crc kubenswrapper[4922]: I0126 15:57:43.293959 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f5cedc59-0829-41da-94bd-17137258865f/openstack-network-exporter/0.log" Jan 26 15:57:43 crc kubenswrapper[4922]: I0126 15:57:43.335729 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f5cedc59-0829-41da-94bd-17137258865f/ovsdbserver-nb/0.log" Jan 26 15:57:43 crc kubenswrapper[4922]: I0126 15:57:43.351118 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d01bf414-0bdd-49f2-aa15-54f8ddb04d7b/nova-metadata-metadata/0.log" Jan 26 15:57:43 crc kubenswrapper[4922]: I0126 15:57:43.554205 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6bc48070-5821-46c0-b06a-d50d64d22e19/openstack-network-exporter/0.log" Jan 26 15:57:43 crc kubenswrapper[4922]: I0126 15:57:43.581853 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6bc48070-5821-46c0-b06a-d50d64d22e19/ovsdbserver-sb/0.log" Jan 26 15:57:43 crc kubenswrapper[4922]: I0126 15:57:43.803047 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/init-config-reloader/0.log" Jan 26 15:57:43 crc kubenswrapper[4922]: I0126 15:57:43.931338 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c4549854d-f2kpv_75199453-47fb-4d94-ae1d-908c20b64cfd/placement-api/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.012391 4922 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_placement-5c4549854d-f2kpv_75199453-47fb-4d94-ae1d-908c20b64cfd/placement-log/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.079890 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/init-config-reloader/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.081975 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/config-reloader/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.254773 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/prometheus/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.333922 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34b7c66d-87b0-4db4-aa8c-7dd19293e8fd/setup-container/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.343745 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6bcc6ecd-6484-4c77-9278-970bfe41f0c2/thanos-sidecar/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.603854 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34b7c66d-87b0-4db4-aa8c-7dd19293e8fd/setup-container/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.644349 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34b7c66d-87b0-4db4-aa8c-7dd19293e8fd/rabbitmq/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.657531 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_1881b31a-fd0f-40c8-a098-10888cec43db/setup-container/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.864584 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_1881b31a-fd0f-40c8-a098-10888cec43db/setup-container/0.log" Jan 26 15:57:44 crc kubenswrapper[4922]: I0126 15:57:44.941651 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_1881b31a-fd0f-40c8-a098-10888cec43db/rabbitmq/0.log" Jan 26 15:57:45 crc kubenswrapper[4922]: I0126 15:57:45.006784 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e3a4bf42-9b24-473a-bca6-f81f1d0884fb/setup-container/0.log" Jan 26 15:57:45 crc kubenswrapper[4922]: I0126 15:57:45.203480 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e3a4bf42-9b24-473a-bca6-f81f1d0884fb/setup-container/0.log" Jan 26 15:57:45 crc kubenswrapper[4922]: I0126 15:57:45.337282 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qbl9d_e17ee626-9062-4bd3-8566-93e6160b89bc/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:45 crc kubenswrapper[4922]: I0126 15:57:45.351446 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e3a4bf42-9b24-473a-bca6-f81f1d0884fb/rabbitmq/0.log" Jan 26 15:57:45 crc kubenswrapper[4922]: I0126 15:57:45.601460 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-l759l_ed501c23-2119-4ef8-9d37-776d3a5c5d6e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:45 crc kubenswrapper[4922]: I0126 15:57:45.618192 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5n2gc_650a3e49-f342-4b32-940a-2f64bdb45fb3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:45 crc kubenswrapper[4922]: I0126 15:57:45.833090 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-d9pm5_d0238a10-7400-4e82-ab24-d9f30ee2b02d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:45 crc kubenswrapper[4922]: I0126 15:57:45.912909 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-mdksr_377f1114-a9f8-4b98-96c9-71f827483095/ssh-known-hosts-edpm-deployment/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.201528 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69b95496c5-qvg59_a2bcb723-e3e3-41f8-9704-10a1f8e78bd7/proxy-server/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.388595 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9mb5n_a8cb73d7-e172-413f-ad9b-9fdf5afcb2eb/swift-ring-rebalance/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.399628 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-69b95496c5-qvg59_a2bcb723-e3e3-41f8-9704-10a1f8e78bd7/proxy-httpd/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.498673 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/account-auditor/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.615408 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/account-reaper/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.704521 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/account-replicator/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.714051 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/account-server/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.721842 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/container-auditor/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.855604 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/container-replicator/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.887557 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/container-server/0.log" Jan 26 15:57:46 crc kubenswrapper[4922]: I0126 15:57:46.970282 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/container-updater/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.026365 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-auditor/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.033823 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-expirer/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.092400 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:57:47 crc kubenswrapper[4922]: E0126 15:57:47.092661 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.174151 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-replicator/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.185192 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-server/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.226116 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/rsync/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.256730 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/object-updater/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.350621 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03d225b5-5466-45de-9417-54a11fa79429/swift-recon-cron/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.553489 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-8nddh_4eaec89f-007e-4ecf-a60f-f9f6729dfe13/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.658529 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_29bf7bdf-8c0e-4e1c-812d-1220cc968575/tempest-tests-tempest-tests-runner/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.791491 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b8a7c013-fdc7-4f64-b17c-b48b89eda7f6/test-operator-logs-container/0.log" Jan 26 15:57:47 crc kubenswrapper[4922]: I0126 15:57:47.920697 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lqs8f_460930ff-ef82-4c8d-8f3b-36551f8fb401/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 15:57:48 crc kubenswrapper[4922]: I0126 15:57:48.718999 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_6d5cf795-cb42-4d01-8121-5ef71cedd729/watcher-applier/0.log" Jan 26 15:57:49 crc kubenswrapper[4922]: I0126 15:57:49.341809 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7b5e0a69-30c9-435f-a566-b97de4e1b850/watcher-api-log/0.log" Jan 26 
15:57:52 crc kubenswrapper[4922]: I0126 15:57:52.316524 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_94f7f2e8-6663-40d0-b4f2-3c2f5f79b8c2/watcher-decision-engine/0.log" Jan 26 15:57:54 crc kubenswrapper[4922]: I0126 15:57:54.245732 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7b5e0a69-30c9-435f-a566-b97de4e1b850/watcher-api/0.log" Jan 26 15:57:55 crc kubenswrapper[4922]: I0126 15:57:55.220798 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9cb3b1de-0efe-4de9-9e48-6f2f6885c197/memcached/0.log" Jan 26 15:58:02 crc kubenswrapper[4922]: I0126 15:58:02.092647 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:58:02 crc kubenswrapper[4922]: E0126 15:58:02.093566 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.060289 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7pbh"] Jan 26 15:58:10 crc kubenswrapper[4922]: E0126 15:58:10.061442 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a1d3c29-207b-42b6-bed5-d47868b4ab8c" containerName="container-00" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.061461 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a1d3c29-207b-42b6-bed5-d47868b4ab8c" containerName="container-00" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.061794 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a1d3c29-207b-42b6-bed5-d47868b4ab8c" containerName="container-00" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.063730 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.074427 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbh"] Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.149902 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-catalog-content\") pod \"redhat-operators-s7pbh\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.149962 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p7fl\" (UniqueName: \"kubernetes.io/projected/63543027-fc2f-49c4-a703-3c967921646f-kube-api-access-4p7fl\") pod \"redhat-operators-s7pbh\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.149984 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-utilities\") pod \"redhat-operators-s7pbh\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.251414 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-catalog-content\") pod \"redhat-operators-s7pbh\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.251457 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p7fl\" (UniqueName: \"kubernetes.io/projected/63543027-fc2f-49c4-a703-3c967921646f-kube-api-access-4p7fl\") pod \"redhat-operators-s7pbh\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.251484 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-utilities\") pod \"redhat-operators-s7pbh\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.252155 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-catalog-content\") pod \"redhat-operators-s7pbh\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.252243 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-utilities\") pod \"redhat-operators-s7pbh\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.270580 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4p7fl\" (UniqueName: \"kubernetes.io/projected/63543027-fc2f-49c4-a703-3c967921646f-kube-api-access-4p7fl\") pod \"redhat-operators-s7pbh\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.385044 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:10 crc kubenswrapper[4922]: I0126 15:58:10.986603 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbh"] Jan 26 15:58:11 crc kubenswrapper[4922]: I0126 15:58:11.436212 4922 generic.go:334] "Generic (PLEG): container finished" podID="63543027-fc2f-49c4-a703-3c967921646f" containerID="7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259" exitCode=0 Jan 26 15:58:11 crc kubenswrapper[4922]: I0126 15:58:11.436267 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbh" event={"ID":"63543027-fc2f-49c4-a703-3c967921646f","Type":"ContainerDied","Data":"7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259"} Jan 26 15:58:11 crc kubenswrapper[4922]: I0126 15:58:11.436310 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbh" event={"ID":"63543027-fc2f-49c4-a703-3c967921646f","Type":"ContainerStarted","Data":"9dc2567b8fe794837121f17838ab319795814616d7e284179f9d247d56d5426e"} Jan 26 15:58:11 crc kubenswrapper[4922]: I0126 15:58:11.438203 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:58:11 crc kubenswrapper[4922]: E0126 15:58:11.489964 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63543027_fc2f_49c4_a703_3c967921646f.slice/crio-7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63543027_fc2f_49c4_a703_3c967921646f.slice/crio-conmon-7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259.scope\": RecentStats: unable to find data in memory cache]" Jan 26 15:58:13 crc kubenswrapper[4922]: I0126 15:58:13.456368 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbh" event={"ID":"63543027-fc2f-49c4-a703-3c967921646f","Type":"ContainerStarted","Data":"d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9"} Jan 26 15:58:16 crc kubenswrapper[4922]: I0126 15:58:16.091947 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:58:16 crc kubenswrapper[4922]: E0126 15:58:16.092576 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:58:17 crc kubenswrapper[4922]: I0126 15:58:17.499090 4922 generic.go:334] "Generic (PLEG): container finished" podID="63543027-fc2f-49c4-a703-3c967921646f" 
containerID="d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9" exitCode=0 Jan 26 15:58:17 crc kubenswrapper[4922]: I0126 15:58:17.499264 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbh" event={"ID":"63543027-fc2f-49c4-a703-3c967921646f","Type":"ContainerDied","Data":"d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9"} Jan 26 15:58:17 crc kubenswrapper[4922]: I0126 15:58:17.631736 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/util/0.log" Jan 26 15:58:17 crc kubenswrapper[4922]: I0126 15:58:17.799305 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/pull/0.log" Jan 26 15:58:17 crc kubenswrapper[4922]: I0126 15:58:17.807233 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/util/0.log" Jan 26 15:58:17 crc kubenswrapper[4922]: I0126 15:58:17.820649 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/pull/0.log" Jan 26 15:58:18 crc kubenswrapper[4922]: I0126 15:58:18.004312 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/extract/0.log" Jan 26 15:58:18 crc kubenswrapper[4922]: I0126 15:58:18.024713 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/pull/0.log" Jan 26 15:58:18 crc kubenswrapper[4922]: I0126 15:58:18.031501 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_16e438577af5a51eabca8c42921e9e9eba7c4478d059c16a645f89ca52l466j_d84c8528-9f70-476f-a622-90992fd49e69/util/0.log" Jan 26 15:58:18 crc kubenswrapper[4922]: I0126 15:58:18.443756 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-sthwh_98d7d86a-4bc1-4165-9dc5-3260b879df04/manager/0.log" Jan 26 15:58:18 crc kubenswrapper[4922]: I0126 15:58:18.775249 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-6pjpc_edd25ba7-355c-48aa-a7f5-0a60df9f1307/manager/0.log" Jan 26 15:58:18 crc kubenswrapper[4922]: I0126 15:58:18.789054 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-9xxqk_3626ad2a-98c3-4f78-9fa5-e7c32e81fa1e/manager/0.log" Jan 26 15:58:18 crc kubenswrapper[4922]: I0126 15:58:18.797343 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-d4kf8_2dc5ea59-1467-4fec-b933-e144ea4fda4a/manager/0.log" Jan 26 15:58:19 crc kubenswrapper[4922]: I0126 15:58:19.044619 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-grh8g_eb93770e-722e-474d-93ef-5767d506fbf5/manager/0.log" Jan 26 15:58:19 crc kubenswrapper[4922]: I0126 15:58:19.051906 
4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fr5t7_2203be8d-8aa1-4617-8297-c715783969a6/manager/0.log" Jan 26 15:58:19 crc kubenswrapper[4922]: I0126 15:58:19.254728 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-vbmsc_3e944bff-02ee-4d1d-948b-350795772f18/manager/0.log" Jan 26 15:58:19 crc kubenswrapper[4922]: I0126 15:58:19.523107 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbh" event={"ID":"63543027-fc2f-49c4-a703-3c967921646f","Type":"ContainerStarted","Data":"79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd"} Jan 26 15:58:19 crc kubenswrapper[4922]: I0126 15:58:19.552083 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7pbh" podStartSLOduration=2.266375953 podStartE2EDuration="9.552048097s" podCreationTimestamp="2026-01-26 15:58:10 +0000 UTC" firstStartedPulling="2026-01-26 15:58:11.437914074 +0000 UTC m=+6508.640176846" lastFinishedPulling="2026-01-26 15:58:18.723586218 +0000 UTC m=+6515.925848990" observedRunningTime="2026-01-26 15:58:19.544751369 +0000 UTC m=+6516.747014141" watchObservedRunningTime="2026-01-26 15:58:19.552048097 +0000 UTC m=+6516.754310869" Jan 26 15:58:19 crc kubenswrapper[4922]: I0126 15:58:19.663781 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-lvrq9_aa376169-3b34-4289-b339-14fc6f14a0e9/manager/0.log" Jan 26 15:58:20 crc kubenswrapper[4922]: I0126 15:58:20.385477 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:20 crc kubenswrapper[4922]: I0126 15:58:20.406927 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:20 crc kubenswrapper[4922]: I0126 15:58:20.632419 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-xwz7c_ae2af37b-8945-48b3-8ed3-c2412b39c897/manager/0.log" Jan 26 15:58:20 crc kubenswrapper[4922]: I0126 15:58:20.644118 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-wxlhk_fa8912b6-c04f-4a1e-bb7a-8cae762f00ab/manager/0.log" Jan 26 15:58:20 crc kubenswrapper[4922]: I0126 15:58:20.684615 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-s2hjs_03233631-2567-42a5-af70-861afeefbba3/manager/0.log" Jan 26 15:58:20 crc kubenswrapper[4922]: I0126 15:58:20.982432 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-n9bp2_f3aabab5-bdde-4359-b011-5887666ee21a/manager/0.log" Jan 26 15:58:20 crc kubenswrapper[4922]: I0126 15:58:20.996744 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-vg2c5_3ed4f8f8-86dd-4331-b60d-ac713fe8be31/manager/0.log" Jan 26 15:58:21 crc kubenswrapper[4922]: I0126 15:58:21.157331 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-npcs6_94e756c6-328c-4065-9d81-2cd1f5293a0a/manager/0.log" Jan 26 15:58:21 crc 
kubenswrapper[4922]: I0126 15:58:21.169187 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qmd2x_e741d752-bf89-4fc2-a173-98a5e6257ffc/manager/0.log" Jan 26 15:58:21 crc kubenswrapper[4922]: I0126 15:58:21.630262 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s7pbh" podUID="63543027-fc2f-49c4-a703-3c967921646f" containerName="registry-server" probeResult="failure" output=< Jan 26 15:58:21 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Jan 26 15:58:21 crc kubenswrapper[4922]: > Jan 26 15:58:21 crc kubenswrapper[4922]: I0126 15:58:21.664445 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-c7fd5fdf7-4dsfg_5857d460-cbd7-4dba-b280-e791678bc021/operator/0.log" Jan 26 15:58:21 crc kubenswrapper[4922]: I0126 15:58:21.812207 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v57b5_64ecb550-f2ab-4e01-88b7-e8059bd434ff/registry-server/0.log" Jan 26 15:58:22 crc kubenswrapper[4922]: I0126 15:58:22.279570 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-7w6r2_f3a5936d-5620-4b92-92ef-71b8387e019e/manager/0.log" Jan 26 15:58:22 crc kubenswrapper[4922]: I0126 15:58:22.409702 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-2mvbc_d487712f-146f-4342-a84e-6dca10b381fe/manager/0.log" Jan 26 15:58:22 crc kubenswrapper[4922]: I0126 15:58:22.584972 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fwdn9_f78795e3-4b41-43ec-b56d-37745dd146cd/operator/0.log" Jan 26 15:58:22 crc kubenswrapper[4922]: I0126 15:58:22.692994 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-h4zkz_9a7bfd31-9ad5-4f6d-b9e4-ea6df606d143/manager/0.log" Jan 26 15:58:22 crc kubenswrapper[4922]: I0126 15:58:22.872682 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b6496445-44795_2de91e12-3fbb-48e3-ac0f-55d98628405e/manager/0.log" Jan 26 15:58:23 crc kubenswrapper[4922]: I0126 15:58:23.017178 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-8tk4x_542bee92-421c-4969-9fb8-da684d74ab1d/manager/0.log" Jan 26 15:58:23 crc kubenswrapper[4922]: I0126 15:58:23.058449 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-wpq74_3732fc65-c182-42c3-9a98-b9aff1d49a1d/manager/0.log" Jan 26 15:58:23 crc kubenswrapper[4922]: I0126 15:58:23.189635 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75d4cf59bb-dctt2_033a8dae-299b-49cc-a63e-2d4bf250488c/manager/0.log" Jan 26 15:58:28 crc kubenswrapper[4922]: I0126 15:58:28.093014 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:58:28 crc kubenswrapper[4922]: E0126 15:58:28.093848 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:58:30 crc kubenswrapper[4922]: I0126 15:58:30.440860 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:30 crc kubenswrapper[4922]: I0126 15:58:30.495424 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:30 crc kubenswrapper[4922]: I0126 15:58:30.680086 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbh"] Jan 26 15:58:31 crc kubenswrapper[4922]: I0126 15:58:31.650138 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7pbh" podUID="63543027-fc2f-49c4-a703-3c967921646f" containerName="registry-server" containerID="cri-o://79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd" gracePeriod=2 Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.228381 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.380272 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-utilities\") pod \"63543027-fc2f-49c4-a703-3c967921646f\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.380438 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p7fl\" (UniqueName: \"kubernetes.io/projected/63543027-fc2f-49c4-a703-3c967921646f-kube-api-access-4p7fl\") pod \"63543027-fc2f-49c4-a703-3c967921646f\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.381086 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-utilities" (OuterVolumeSpecName: "utilities") pod "63543027-fc2f-49c4-a703-3c967921646f" (UID: "63543027-fc2f-49c4-a703-3c967921646f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.382257 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-catalog-content\") pod \"63543027-fc2f-49c4-a703-3c967921646f\" (UID: \"63543027-fc2f-49c4-a703-3c967921646f\") " Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.383897 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.404001 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63543027-fc2f-49c4-a703-3c967921646f-kube-api-access-4p7fl" (OuterVolumeSpecName: "kube-api-access-4p7fl") pod "63543027-fc2f-49c4-a703-3c967921646f" (UID: "63543027-fc2f-49c4-a703-3c967921646f"). InnerVolumeSpecName "kube-api-access-4p7fl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.486191 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p7fl\" (UniqueName: \"kubernetes.io/projected/63543027-fc2f-49c4-a703-3c967921646f-kube-api-access-4p7fl\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.537294 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63543027-fc2f-49c4-a703-3c967921646f" (UID: "63543027-fc2f-49c4-a703-3c967921646f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.594350 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63543027-fc2f-49c4-a703-3c967921646f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.664225 4922 generic.go:334] "Generic (PLEG): container finished" podID="63543027-fc2f-49c4-a703-3c967921646f" containerID="79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd" exitCode=0 Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.664308 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbh" event={"ID":"63543027-fc2f-49c4-a703-3c967921646f","Type":"ContainerDied","Data":"79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd"} Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.664323 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7pbh" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.664362 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbh" event={"ID":"63543027-fc2f-49c4-a703-3c967921646f","Type":"ContainerDied","Data":"9dc2567b8fe794837121f17838ab319795814616d7e284179f9d247d56d5426e"} Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.664388 4922 scope.go:117] "RemoveContainer" containerID="79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.689731 4922 scope.go:117] "RemoveContainer" containerID="d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.705841 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbh"] Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.721276 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbh"] Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.754176 4922 scope.go:117] "RemoveContainer" containerID="7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.793802 4922 scope.go:117] "RemoveContainer" containerID="79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd" Jan 26 15:58:32 crc kubenswrapper[4922]: E0126 15:58:32.794330 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd\": container with ID starting with 79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd 
not found: ID does not exist" containerID="79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.794372 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd"} err="failed to get container status \"79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd\": rpc error: code = NotFound desc = could not find container \"79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd\": container with ID starting with 79fe48f79d5befc77d2576a64bcf4f17baa166b6336bf609bfed890ce52c58dd not found: ID does not exist" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.794400 4922 scope.go:117] "RemoveContainer" containerID="d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9" Jan 26 15:58:32 crc kubenswrapper[4922]: E0126 15:58:32.794697 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9\": container with ID starting with d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9 not found: ID does not exist" containerID="d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.794723 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9"} err="failed to get container status \"d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9\": rpc error: code = NotFound desc = could not find container \"d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9\": container with ID starting with d4171498074809f037446b489da0bac63c030ce628735465f25bb50178bdeed9 not found: ID does not exist" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.794739 4922 scope.go:117] "RemoveContainer" containerID="7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259" Jan 26 15:58:32 crc kubenswrapper[4922]: E0126 15:58:32.795102 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259\": container with ID starting with 7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259 not found: ID does not exist" containerID="7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259" Jan 26 15:58:32 crc kubenswrapper[4922]: I0126 15:58:32.795129 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259"} err="failed to get container status \"7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259\": rpc error: code = NotFound desc = could not find container \"7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259\": container with ID starting with 7c48783515afcf0ba7061b0be5723e2852396bc03f62970e078b456c1644c259 not found: ID does not exist" Jan 26 15:58:33 crc kubenswrapper[4922]: I0126 15:58:33.103654 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63543027-fc2f-49c4-a703-3c967921646f" path="/var/lib/kubelet/pods/63543027-fc2f-49c4-a703-3c967921646f/volumes" Jan 26 15:58:39 crc kubenswrapper[4922]: I0126 15:58:39.092917 4922 scope.go:117] "RemoveContainer" 
containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:58:39 crc kubenswrapper[4922]: E0126 15:58:39.093763 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:58:42 crc kubenswrapper[4922]: I0126 15:58:42.051369 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qhcrv_3529a429-628d-4c73-aaad-ee3719ea2022/control-plane-machine-set-operator/0.log" Jan 26 15:58:42 crc kubenswrapper[4922]: I0126 15:58:42.313901 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s5rq6_3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f/kube-rbac-proxy/0.log" Jan 26 15:58:42 crc kubenswrapper[4922]: I0126 15:58:42.345886 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s5rq6_3ca0d4fd-2ff9-4ad3-a34e-2a2030c6293f/machine-api-operator/0.log" Jan 26 15:58:50 crc kubenswrapper[4922]: I0126 15:58:50.092725 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:58:50 crc kubenswrapper[4922]: E0126 15:58:50.093588 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:58:54 crc kubenswrapper[4922]: I0126 15:58:54.720006 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vv74v_0ac6b35b-af7a-4913-985e-8d42d2f246f9/cert-manager-controller/0.log" Jan 26 15:58:54 crc kubenswrapper[4922]: I0126 15:58:54.906328 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-vw7ht_90a11b15-590d-43f0-957a-67389e3cd75b/cert-manager-cainjector/0.log" Jan 26 15:58:54 crc kubenswrapper[4922]: I0126 15:58:54.937444 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-hdzlp_0cfa6d3f-9300-4d9a-b0d7-c1c321bb0124/cert-manager-webhook/0.log" Jan 26 15:59:02 crc kubenswrapper[4922]: I0126 15:59:02.093761 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:59:02 crc kubenswrapper[4922]: E0126 15:59:02.094699 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:59:07 crc kubenswrapper[4922]: I0126 15:59:07.856190 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-rhptb_fedbbbac-c62a-46aa-adfd-4bed0c5282fc/nmstate-console-plugin/0.log" Jan 26 15:59:08 crc kubenswrapper[4922]: I0126 15:59:08.059631 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-99g5t_ca3a7e5f-211d-40ef-bfb8-261b1af52cda/kube-rbac-proxy/0.log" Jan 26 15:59:08 crc kubenswrapper[4922]: I0126 15:59:08.073552 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c6w6t_594847ad-6266-4357-a47a-aa6383207517/nmstate-handler/0.log" Jan 26 15:59:08 crc kubenswrapper[4922]: I0126 15:59:08.195875 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-99g5t_ca3a7e5f-211d-40ef-bfb8-261b1af52cda/nmstate-metrics/0.log" Jan 26 15:59:08 crc kubenswrapper[4922]: I0126 15:59:08.258369 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-7mfk9_95cb5278-d3ed-40e1-8d00-6dd6acbedd3d/nmstate-operator/0.log" Jan 26 15:59:08 crc kubenswrapper[4922]: I0126 15:59:08.430050 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-w7dvs_e02060f5-4687-4f14-9e1a-d94d855d5563/nmstate-webhook/0.log" Jan 26 15:59:15 crc kubenswrapper[4922]: I0126 15:59:15.094774 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:59:15 crc kubenswrapper[4922]: E0126 15:59:15.095745 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:59:23 crc kubenswrapper[4922]: I0126 15:59:23.405185 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xgf5b_3249a43d-d843-43c3-b922-be437eabb548/prometheus-operator/0.log" Jan 26 15:59:23 crc kubenswrapper[4922]: I0126 15:59:23.598594 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9_7289018a-d6ca-4075-b586-e180be982247/prometheus-operator-admission-webhook/0.log" Jan 26 15:59:23 crc kubenswrapper[4922]: I0126 15:59:23.706518 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv_e30b09af-aae4-4f17-ab60-25f6f3dca352/prometheus-operator-admission-webhook/0.log" Jan 26 15:59:23 crc kubenswrapper[4922]: I0126 15:59:23.827942 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mct2h_157e0710-b880-4501-99ad-864b2f70cef5/operator/0.log" Jan 26 15:59:23 crc kubenswrapper[4922]: I0126 15:59:23.911596 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7gzrt_5b04fb53-39bc-4552-b7af-39e57a4102df/perses-operator/0.log" Jan 26 15:59:26 crc kubenswrapper[4922]: I0126 15:59:26.092871 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:59:26 crc kubenswrapper[4922]: E0126 15:59:26.094778 4922 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:59:38 crc kubenswrapper[4922]: I0126 15:59:38.664238 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bkg52_bac401ac-4b21-403d-a9e0-808c69a6e0e6/kube-rbac-proxy/0.log" Jan 26 15:59:38 crc kubenswrapper[4922]: I0126 15:59:38.751391 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bkg52_bac401ac-4b21-403d-a9e0-808c69a6e0e6/controller/0.log" Jan 26 15:59:38 crc kubenswrapper[4922]: I0126 15:59:38.911824 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-frr-files/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.111374 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-reloader/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.129972 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-frr-files/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.144650 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-reloader/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.157653 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-metrics/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.328326 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-frr-files/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.370736 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-reloader/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.386199 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-metrics/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.393077 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-metrics/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.538637 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-metrics/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.573406 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/controller/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.583863 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-reloader/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.597656 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/cp-frr-files/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.760563 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/frr-metrics/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.818994 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/kube-rbac-proxy-frr/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.831241 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/kube-rbac-proxy/0.log" Jan 26 15:59:39 crc kubenswrapper[4922]: I0126 15:59:39.991424 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/reloader/0.log" Jan 26 15:59:40 crc kubenswrapper[4922]: I0126 15:59:40.092021 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:59:40 crc kubenswrapper[4922]: E0126 15:59:40.092500 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:59:40 crc kubenswrapper[4922]: I0126 15:59:40.094469 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-gl477_c35b1371-974a-4a0f-b8a4-d7bf024090aa/frr-k8s-webhook-server/0.log" Jan 26 15:59:40 crc kubenswrapper[4922]: I0126 15:59:40.338371 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c599ccf7c-52gjl_74e0af8f-5a55-4376-912d-095cd5078f93/manager/0.log" Jan 26 15:59:40 crc kubenswrapper[4922]: I0126 15:59:40.536699 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fc78bf7bd-42g89_b7938fe0-7f27-49c6-959d-62405a4847f1/webhook-server/0.log" Jan 26 15:59:40 crc kubenswrapper[4922]: I0126 15:59:40.553935 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q7phl_2a55e4e3-80c5-4e46-8916-5a306903ce70/kube-rbac-proxy/0.log" Jan 26 15:59:41 crc kubenswrapper[4922]: I0126 15:59:41.263399 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q7phl_2a55e4e3-80c5-4e46-8916-5a306903ce70/speaker/0.log" Jan 26 15:59:41 crc kubenswrapper[4922]: I0126 15:59:41.669569 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kkr8x_5aeb84a2-be84-4867-a141-e879208736c4/frr/0.log" Jan 26 15:59:52 crc kubenswrapper[4922]: I0126 15:59:52.092548 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 15:59:52 crc kubenswrapper[4922]: E0126 15:59:52.093419 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 15:59:54 crc kubenswrapper[4922]: I0126 15:59:54.735195 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/util/0.log" Jan 26 15:59:54 crc kubenswrapper[4922]: I0126 15:59:54.916652 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/util/0.log" Jan 26 15:59:54 crc kubenswrapper[4922]: I0126 15:59:54.963905 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/pull/0.log" Jan 26 15:59:54 crc kubenswrapper[4922]: I0126 15:59:54.965921 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/pull/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.152779 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/util/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.153195 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/pull/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.153509 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfsq89_799897f3-cce8-4769-8763-905e8e372ffb/extract/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.313501 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/util/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.511281 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/util/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.515118 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/pull/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.516618 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/pull/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.753887 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/util/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.792916 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/extract/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.809782 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713dftnx_0965b8eb-299c-4245-a5e2-a695e6011131/pull/0.log" Jan 26 15:59:55 crc kubenswrapper[4922]: I0126 15:59:55.940680 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/util/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.120289 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/pull/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.124775 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/util/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.132532 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/pull/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.283330 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/pull/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.291446 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/util/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.322997 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6dp7_c45425cd-fcc2-44ca-9f6f-6e1c9296ef66/extract/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.470005 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-utilities/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.666157 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-content/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.676562 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-utilities/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.686608 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-content/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.881824 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-utilities/0.log" Jan 26 15:59:56 crc kubenswrapper[4922]: I0126 15:59:56.945929 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/extract-content/0.log" Jan 26 15:59:57 crc kubenswrapper[4922]: I0126 15:59:57.120675 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-utilities/0.log" Jan 26 15:59:57 crc kubenswrapper[4922]: I0126 15:59:57.301239 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xmwkw_c8282d89-3d21-4ad5-b707-f00019cc6e70/registry-server/0.log" Jan 26 15:59:57 crc kubenswrapper[4922]: I0126 15:59:57.356154 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-content/0.log" Jan 26 15:59:57 crc kubenswrapper[4922]: I0126 15:59:57.366023 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-utilities/0.log" Jan 26 15:59:57 crc kubenswrapper[4922]: I0126 15:59:57.458381 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-content/0.log" Jan 26 15:59:57 crc kubenswrapper[4922]: I0126 15:59:57.672141 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-content/0.log" Jan 26 15:59:57 crc kubenswrapper[4922]: I0126 15:59:57.682204 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/extract-utilities/0.log" Jan 26 15:59:57 crc kubenswrapper[4922]: I0126 15:59:57.937576 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tjq29_a88a2014-3fba-45e3-bc74-1b2c803c10b5/marketplace-operator/0.log" Jan 26 15:59:58 crc kubenswrapper[4922]: I0126 15:59:58.074750 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-utilities/0.log" Jan 26 15:59:58 crc kubenswrapper[4922]: I0126 15:59:58.324441 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-utilities/0.log" Jan 26 15:59:58 crc kubenswrapper[4922]: I0126 15:59:58.349310 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-content/0.log" Jan 26 15:59:58 crc kubenswrapper[4922]: I0126 15:59:58.362849 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-content/0.log" Jan 26 15:59:58 crc kubenswrapper[4922]: I0126 15:59:58.395924 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2gwq_e57f87ec-2866-4694-b3f4-0907ca749e1e/registry-server/0.log" Jan 26 15:59:58 crc kubenswrapper[4922]: I0126 15:59:58.576818 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-utilities/0.log" Jan 26 15:59:58 crc kubenswrapper[4922]: I0126 15:59:58.603968 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/extract-content/0.log" Jan 26 15:59:58 crc kubenswrapper[4922]: I0126 15:59:58.783568 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-utilities/0.log" Jan 26 15:59:58 crc kubenswrapper[4922]: I0126 15:59:58.877919 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w4dhl_bb7e36f7-a7c0-4fff-8b81-77738ded90e2/registry-server/0.log" Jan 26 15:59:59 crc kubenswrapper[4922]: I0126 15:59:59.011993 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-content/0.log" Jan 26 15:59:59 crc kubenswrapper[4922]: I0126 15:59:59.037432 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-utilities/0.log" Jan 26 15:59:59 crc kubenswrapper[4922]: I0126 15:59:59.045556 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-content/0.log" Jan 26 15:59:59 crc kubenswrapper[4922]: I0126 15:59:59.202613 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-utilities/0.log" Jan 26 15:59:59 crc kubenswrapper[4922]: I0126 15:59:59.226493 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/extract-content/0.log" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.061556 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kj8wd_8f24cd2f-292d-4fd4-9239-c18de70680ad/registry-server/0.log" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.146534 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk"] Jan 26 16:00:00 crc kubenswrapper[4922]: E0126 16:00:00.147122 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63543027-fc2f-49c4-a703-3c967921646f" containerName="registry-server" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.147147 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="63543027-fc2f-49c4-a703-3c967921646f" containerName="registry-server" Jan 26 16:00:00 crc kubenswrapper[4922]: E0126 16:00:00.147166 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63543027-fc2f-49c4-a703-3c967921646f" containerName="extract-utilities" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.147175 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="63543027-fc2f-49c4-a703-3c967921646f" containerName="extract-utilities" Jan 26 16:00:00 crc kubenswrapper[4922]: E0126 16:00:00.147210 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63543027-fc2f-49c4-a703-3c967921646f" containerName="extract-content" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.147223 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="63543027-fc2f-49c4-a703-3c967921646f" containerName="extract-content" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.147481 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="63543027-fc2f-49c4-a703-3c967921646f" 
containerName="registry-server" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.148317 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.150142 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.150261 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.161707 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk"] Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.267329 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-config-volume\") pod \"collect-profiles-29490720-5thvk\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.267367 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4xk\" (UniqueName: \"kubernetes.io/projected/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-kube-api-access-tg4xk\") pod \"collect-profiles-29490720-5thvk\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.267482 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-secret-volume\") pod \"collect-profiles-29490720-5thvk\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.369865 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-config-volume\") pod \"collect-profiles-29490720-5thvk\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.369919 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg4xk\" (UniqueName: \"kubernetes.io/projected/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-kube-api-access-tg4xk\") pod \"collect-profiles-29490720-5thvk\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.370041 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-secret-volume\") pod \"collect-profiles-29490720-5thvk\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.370966 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-config-volume\") pod \"collect-profiles-29490720-5thvk\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.390430 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg4xk\" (UniqueName: \"kubernetes.io/projected/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-kube-api-access-tg4xk\") pod \"collect-profiles-29490720-5thvk\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.390681 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-secret-volume\") pod \"collect-profiles-29490720-5thvk\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:00 crc kubenswrapper[4922]: I0126 16:00:00.524720 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:01 crc kubenswrapper[4922]: I0126 16:00:01.002150 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk"] Jan 26 16:00:01 crc kubenswrapper[4922]: I0126 16:00:01.493964 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" event={"ID":"42fd3f38-5c57-4839-ba71-b9d4ed1c231e","Type":"ContainerStarted","Data":"6734dd530e7d087ca06d3757ed321b31cc75ccfbb384cc722762affb283a6264"} Jan 26 16:00:01 crc kubenswrapper[4922]: I0126 16:00:01.494329 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" event={"ID":"42fd3f38-5c57-4839-ba71-b9d4ed1c231e","Type":"ContainerStarted","Data":"e0a608ca1b5ca6616aa031d1addfcf2f6ef56a761b0ae024f3f83187b787ab7a"} Jan 26 16:00:01 crc kubenswrapper[4922]: I0126 16:00:01.508711 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" podStartSLOduration=1.5086926809999999 podStartE2EDuration="1.508692681s" podCreationTimestamp="2026-01-26 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:01.507476369 +0000 UTC m=+6618.709739141" watchObservedRunningTime="2026-01-26 16:00:01.508692681 +0000 UTC m=+6618.710955443" Jan 26 16:00:02 crc kubenswrapper[4922]: I0126 16:00:02.504851 4922 generic.go:334] "Generic (PLEG): container finished" podID="42fd3f38-5c57-4839-ba71-b9d4ed1c231e" containerID="6734dd530e7d087ca06d3757ed321b31cc75ccfbb384cc722762affb283a6264" exitCode=0 Jan 26 16:00:02 crc kubenswrapper[4922]: I0126 16:00:02.504938 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" event={"ID":"42fd3f38-5c57-4839-ba71-b9d4ed1c231e","Type":"ContainerDied","Data":"6734dd530e7d087ca06d3757ed321b31cc75ccfbb384cc722762affb283a6264"} Jan 26 16:00:03 crc kubenswrapper[4922]: I0126 16:00:03.930517 4922 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.054656 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-config-volume\") pod \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.055126 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-secret-volume\") pod \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.055186 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg4xk\" (UniqueName: \"kubernetes.io/projected/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-kube-api-access-tg4xk\") pod \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\" (UID: \"42fd3f38-5c57-4839-ba71-b9d4ed1c231e\") " Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.055261 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-config-volume" (OuterVolumeSpecName: "config-volume") pod "42fd3f38-5c57-4839-ba71-b9d4ed1c231e" (UID: "42fd3f38-5c57-4839-ba71-b9d4ed1c231e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.061821 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "42fd3f38-5c57-4839-ba71-b9d4ed1c231e" (UID: "42fd3f38-5c57-4839-ba71-b9d4ed1c231e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.066255 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-kube-api-access-tg4xk" (OuterVolumeSpecName: "kube-api-access-tg4xk") pod "42fd3f38-5c57-4839-ba71-b9d4ed1c231e" (UID: "42fd3f38-5c57-4839-ba71-b9d4ed1c231e"). InnerVolumeSpecName "kube-api-access-tg4xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.162647 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.162685 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.162699 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg4xk\" (UniqueName: \"kubernetes.io/projected/42fd3f38-5c57-4839-ba71-b9d4ed1c231e-kube-api-access-tg4xk\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.531285 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.531350 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-5thvk" event={"ID":"42fd3f38-5c57-4839-ba71-b9d4ed1c231e","Type":"ContainerDied","Data":"e0a608ca1b5ca6616aa031d1addfcf2f6ef56a761b0ae024f3f83187b787ab7a"} Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.531421 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a608ca1b5ca6616aa031d1addfcf2f6ef56a761b0ae024f3f83187b787ab7a" Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.625344 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"] Jan 26 16:00:04 crc kubenswrapper[4922]: I0126 16:00:04.633964 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-wktr2"] Jan 26 16:00:05 crc kubenswrapper[4922]: I0126 16:00:05.104884 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbf3a22-f737-472e-9364-4a03d629df67" path="/var/lib/kubelet/pods/dcbf3a22-f737-472e-9364-4a03d629df67/volumes" Jan 26 16:00:06 crc kubenswrapper[4922]: I0126 16:00:06.092706 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:00:06 crc kubenswrapper[4922]: E0126 16:00:06.093359 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:00:13 crc kubenswrapper[4922]: I0126 16:00:13.175527 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xgf5b_3249a43d-d843-43c3-b922-be437eabb548/prometheus-operator/0.log" Jan 26 16:00:13 crc kubenswrapper[4922]: I0126 16:00:13.216789 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78cff5b5bc-r8lnv_e30b09af-aae4-4f17-ab60-25f6f3dca352/prometheus-operator-admission-webhook/0.log" Jan 26 16:00:13 crc kubenswrapper[4922]: I0126 16:00:13.245944 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78cff5b5bc-lszs9_7289018a-d6ca-4075-b586-e180be982247/prometheus-operator-admission-webhook/0.log" Jan 26 16:00:13 crc kubenswrapper[4922]: I0126 16:00:13.422497 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7gzrt_5b04fb53-39bc-4552-b7af-39e57a4102df/perses-operator/0.log" Jan 26 16:00:13 crc kubenswrapper[4922]: I0126 16:00:13.443506 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mct2h_157e0710-b880-4501-99ad-864b2f70cef5/operator/0.log" Jan 26 16:00:18 crc kubenswrapper[4922]: I0126 16:00:18.092602 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:00:18 crc kubenswrapper[4922]: E0126 16:00:18.093540 4922 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:00:22 crc kubenswrapper[4922]: I0126 16:00:22.107708 4922 scope.go:117] "RemoveContainer" containerID="c6c7ed5e7f3c8fbe07d238013bad7344902e5398376ce2f33218cfd27abef5aa" Jan 26 16:00:31 crc kubenswrapper[4922]: I0126 16:00:31.092744 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:00:31 crc kubenswrapper[4922]: E0126 16:00:31.093538 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:00:43 crc kubenswrapper[4922]: I0126 16:00:43.103778 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:00:43 crc kubenswrapper[4922]: E0126 16:00:43.104503 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:00:55 crc kubenswrapper[4922]: I0126 16:00:55.093460 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:00:55 crc kubenswrapper[4922]: E0126 16:00:55.094339 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.163179 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490721-mcdhl"] Jan 26 16:01:00 crc kubenswrapper[4922]: E0126 16:01:00.164129 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fd3f38-5c57-4839-ba71-b9d4ed1c231e" containerName="collect-profiles" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.164148 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fd3f38-5c57-4839-ba71-b9d4ed1c231e" containerName="collect-profiles" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.164436 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="42fd3f38-5c57-4839-ba71-b9d4ed1c231e" containerName="collect-profiles" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.165376 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.182034 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490721-mcdhl"] Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.312440 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-config-data\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.312578 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-fernet-keys\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.312659 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwkjz\" (UniqueName: \"kubernetes.io/projected/2a61314f-c3f6-434c-be20-0937b2cba8b9-kube-api-access-hwkjz\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.312999 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-combined-ca-bundle\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.415252 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-config-data\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.415667 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-fernet-keys\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.415759 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwkjz\" (UniqueName: \"kubernetes.io/projected/2a61314f-c3f6-434c-be20-0937b2cba8b9-kube-api-access-hwkjz\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.415990 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-combined-ca-bundle\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.422331 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-config-data\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.422515 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-combined-ca-bundle\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.423255 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-fernet-keys\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.450931 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwkjz\" (UniqueName: \"kubernetes.io/projected/2a61314f-c3f6-434c-be20-0937b2cba8b9-kube-api-access-hwkjz\") pod \"keystone-cron-29490721-mcdhl\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.487685 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:00 crc kubenswrapper[4922]: I0126 16:01:00.955906 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490721-mcdhl"] Jan 26 16:01:01 crc kubenswrapper[4922]: I0126 16:01:01.138697 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-mcdhl" event={"ID":"2a61314f-c3f6-434c-be20-0937b2cba8b9","Type":"ContainerStarted","Data":"021425a0e98a111efd56b79436763cd58abb347bb9159e8442bffacf2f73174e"} Jan 26 16:01:02 crc kubenswrapper[4922]: I0126 16:01:02.150058 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-mcdhl" event={"ID":"2a61314f-c3f6-434c-be20-0937b2cba8b9","Type":"ContainerStarted","Data":"5272e50809ae7a83dfbc0eca0f4f135f420c5fc856037f551b6edd8c869affd2"} Jan 26 16:01:02 crc kubenswrapper[4922]: I0126 16:01:02.170830 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490721-mcdhl" podStartSLOduration=2.170807166 podStartE2EDuration="2.170807166s" podCreationTimestamp="2026-01-26 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:02.164512805 +0000 UTC m=+6679.366775607" watchObservedRunningTime="2026-01-26 16:01:02.170807166 +0000 UTC m=+6679.373069938" Jan 26 16:01:05 crc kubenswrapper[4922]: I0126 16:01:05.195248 4922 generic.go:334] "Generic (PLEG): container finished" podID="2a61314f-c3f6-434c-be20-0937b2cba8b9" containerID="5272e50809ae7a83dfbc0eca0f4f135f420c5fc856037f551b6edd8c869affd2" exitCode=0 Jan 26 16:01:05 crc kubenswrapper[4922]: I0126 16:01:05.195514 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-mcdhl" event={"ID":"2a61314f-c3f6-434c-be20-0937b2cba8b9","Type":"ContainerDied","Data":"5272e50809ae7a83dfbc0eca0f4f135f420c5fc856037f551b6edd8c869affd2"} Jan 26 16:01:06 crc kubenswrapper[4922]: 
I0126 16:01:06.600163 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.754120 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-combined-ca-bundle\") pod \"2a61314f-c3f6-434c-be20-0937b2cba8b9\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.754664 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwkjz\" (UniqueName: \"kubernetes.io/projected/2a61314f-c3f6-434c-be20-0937b2cba8b9-kube-api-access-hwkjz\") pod \"2a61314f-c3f6-434c-be20-0937b2cba8b9\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.754817 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-fernet-keys\") pod \"2a61314f-c3f6-434c-be20-0937b2cba8b9\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.754985 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-config-data\") pod \"2a61314f-c3f6-434c-be20-0937b2cba8b9\" (UID: \"2a61314f-c3f6-434c-be20-0937b2cba8b9\") " Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.760938 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2a61314f-c3f6-434c-be20-0937b2cba8b9" (UID: "2a61314f-c3f6-434c-be20-0937b2cba8b9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.762365 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a61314f-c3f6-434c-be20-0937b2cba8b9-kube-api-access-hwkjz" (OuterVolumeSpecName: "kube-api-access-hwkjz") pod "2a61314f-c3f6-434c-be20-0937b2cba8b9" (UID: "2a61314f-c3f6-434c-be20-0937b2cba8b9"). InnerVolumeSpecName "kube-api-access-hwkjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.799935 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a61314f-c3f6-434c-be20-0937b2cba8b9" (UID: "2a61314f-c3f6-434c-be20-0937b2cba8b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.835865 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-config-data" (OuterVolumeSpecName: "config-data") pod "2a61314f-c3f6-434c-be20-0937b2cba8b9" (UID: "2a61314f-c3f6-434c-be20-0937b2cba8b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.857668 4922 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.857703 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.857719 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a61314f-c3f6-434c-be20-0937b2cba8b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:06 crc kubenswrapper[4922]: I0126 16:01:06.857734 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwkjz\" (UniqueName: \"kubernetes.io/projected/2a61314f-c3f6-434c-be20-0937b2cba8b9-kube-api-access-hwkjz\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:07 crc kubenswrapper[4922]: I0126 16:01:07.092599 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:01:07 crc kubenswrapper[4922]: E0126 16:01:07.092906 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:01:07 crc kubenswrapper[4922]: I0126 16:01:07.235017 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-mcdhl" event={"ID":"2a61314f-c3f6-434c-be20-0937b2cba8b9","Type":"ContainerDied","Data":"021425a0e98a111efd56b79436763cd58abb347bb9159e8442bffacf2f73174e"} Jan 26 16:01:07 crc kubenswrapper[4922]: I0126 16:01:07.235435 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="021425a0e98a111efd56b79436763cd58abb347bb9159e8442bffacf2f73174e" Jan 26 16:01:07 crc kubenswrapper[4922]: I0126 16:01:07.235107 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-mcdhl" Jan 26 16:01:20 crc kubenswrapper[4922]: I0126 16:01:20.092889 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:01:20 crc kubenswrapper[4922]: E0126 16:01:20.093705 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:01:20 crc kubenswrapper[4922]: I0126 16:01:20.884789 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdfp9"] Jan 26 16:01:20 crc kubenswrapper[4922]: E0126 16:01:20.885547 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a61314f-c3f6-434c-be20-0937b2cba8b9" containerName="keystone-cron" Jan 26 16:01:20 crc kubenswrapper[4922]: I0126 16:01:20.885564 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a61314f-c3f6-434c-be20-0937b2cba8b9" containerName="keystone-cron" Jan 26 16:01:20 crc kubenswrapper[4922]: I0126 16:01:20.885827 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a61314f-c3f6-434c-be20-0937b2cba8b9" containerName="keystone-cron" Jan 26 16:01:20 crc kubenswrapper[4922]: I0126 16:01:20.889601 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:20 crc kubenswrapper[4922]: I0126 16:01:20.915457 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdfp9"] Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.084669 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vtbk\" (UniqueName: \"kubernetes.io/projected/82844a01-8fce-48d2-9466-363a3326e469-kube-api-access-4vtbk\") pod \"redhat-marketplace-rdfp9\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.085634 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-utilities\") pod \"redhat-marketplace-rdfp9\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.085689 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-catalog-content\") pod \"redhat-marketplace-rdfp9\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.187792 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-utilities\") pod \"redhat-marketplace-rdfp9\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 
16:01:21.187856 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-catalog-content\") pod \"redhat-marketplace-rdfp9\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.187917 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vtbk\" (UniqueName: \"kubernetes.io/projected/82844a01-8fce-48d2-9466-363a3326e469-kube-api-access-4vtbk\") pod \"redhat-marketplace-rdfp9\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.188353 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-utilities\") pod \"redhat-marketplace-rdfp9\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.188415 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-catalog-content\") pod \"redhat-marketplace-rdfp9\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.208934 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vtbk\" (UniqueName: \"kubernetes.io/projected/82844a01-8fce-48d2-9466-363a3326e469-kube-api-access-4vtbk\") pod \"redhat-marketplace-rdfp9\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.211935 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:21 crc kubenswrapper[4922]: I0126 16:01:21.688481 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdfp9"] Jan 26 16:01:22 crc kubenswrapper[4922]: I0126 16:01:22.402485 4922 generic.go:334] "Generic (PLEG): container finished" podID="82844a01-8fce-48d2-9466-363a3326e469" containerID="a6f199547219e754f120aa3a85a5a28eb3d9dd2523adda8e0f205066958a485d" exitCode=0 Jan 26 16:01:22 crc kubenswrapper[4922]: I0126 16:01:22.402835 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdfp9" event={"ID":"82844a01-8fce-48d2-9466-363a3326e469","Type":"ContainerDied","Data":"a6f199547219e754f120aa3a85a5a28eb3d9dd2523adda8e0f205066958a485d"} Jan 26 16:01:22 crc kubenswrapper[4922]: I0126 16:01:22.402869 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdfp9" event={"ID":"82844a01-8fce-48d2-9466-363a3326e469","Type":"ContainerStarted","Data":"e9e55fc4813a3bbc5ef5cdcfaaca234d267c37fb9155dab85fbca5066604ca35"} Jan 26 16:01:23 crc kubenswrapper[4922]: I0126 16:01:23.415269 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdfp9" event={"ID":"82844a01-8fce-48d2-9466-363a3326e469","Type":"ContainerStarted","Data":"1b3ac043cf4d98023010be358bd81e45eda281aacd3b609db2278568462d9f24"} Jan 26 16:01:24 crc kubenswrapper[4922]: I0126 16:01:24.425847 4922 generic.go:334] "Generic (PLEG): container finished" podID="82844a01-8fce-48d2-9466-363a3326e469" containerID="1b3ac043cf4d98023010be358bd81e45eda281aacd3b609db2278568462d9f24" exitCode=0 Jan 26 16:01:24 crc kubenswrapper[4922]: I0126 16:01:24.425898 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdfp9" event={"ID":"82844a01-8fce-48d2-9466-363a3326e469","Type":"ContainerDied","Data":"1b3ac043cf4d98023010be358bd81e45eda281aacd3b609db2278568462d9f24"} Jan 26 16:01:25 crc kubenswrapper[4922]: I0126 16:01:25.439370 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdfp9" event={"ID":"82844a01-8fce-48d2-9466-363a3326e469","Type":"ContainerStarted","Data":"bd2470e1e51174d8e851b2272c5d8ac09374f7d571c48cf5ef2a347886ea3282"} Jan 26 16:01:25 crc kubenswrapper[4922]: I0126 16:01:25.470822 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rdfp9" podStartSLOduration=2.936525994 podStartE2EDuration="5.470800463s" podCreationTimestamp="2026-01-26 16:01:20 +0000 UTC" firstStartedPulling="2026-01-26 16:01:22.4054776 +0000 UTC m=+6699.607740372" lastFinishedPulling="2026-01-26 16:01:24.939752069 +0000 UTC m=+6702.142014841" observedRunningTime="2026-01-26 16:01:25.460440971 +0000 UTC m=+6702.662703743" watchObservedRunningTime="2026-01-26 16:01:25.470800463 +0000 UTC m=+6702.673063235" Jan 26 16:01:31 crc kubenswrapper[4922]: I0126 16:01:31.213089 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:31 crc kubenswrapper[4922]: I0126 16:01:31.213554 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:31 crc kubenswrapper[4922]: I0126 16:01:31.261363 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:31 crc kubenswrapper[4922]: I0126 16:01:31.552937 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:35 crc kubenswrapper[4922]: I0126 16:01:35.093057 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:01:35 crc kubenswrapper[4922]: E0126 16:01:35.093415 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.250205 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdfp9"] Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.250812 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rdfp9" podUID="82844a01-8fce-48d2-9466-363a3326e469" containerName="registry-server" containerID="cri-o://bd2470e1e51174d8e851b2272c5d8ac09374f7d571c48cf5ef2a347886ea3282" gracePeriod=2 Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.613725 4922 generic.go:334] "Generic (PLEG): container finished" podID="82844a01-8fce-48d2-9466-363a3326e469" containerID="bd2470e1e51174d8e851b2272c5d8ac09374f7d571c48cf5ef2a347886ea3282" exitCode=0 Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.613776 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdfp9" event={"ID":"82844a01-8fce-48d2-9466-363a3326e469","Type":"ContainerDied","Data":"bd2470e1e51174d8e851b2272c5d8ac09374f7d571c48cf5ef2a347886ea3282"} Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.742261 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.868555 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-catalog-content\") pod \"82844a01-8fce-48d2-9466-363a3326e469\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.868926 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-utilities\") pod \"82844a01-8fce-48d2-9466-363a3326e469\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.869165 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vtbk\" (UniqueName: \"kubernetes.io/projected/82844a01-8fce-48d2-9466-363a3326e469-kube-api-access-4vtbk\") pod \"82844a01-8fce-48d2-9466-363a3326e469\" (UID: \"82844a01-8fce-48d2-9466-363a3326e469\") " Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.871249 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-utilities" (OuterVolumeSpecName: "utilities") pod "82844a01-8fce-48d2-9466-363a3326e469" (UID: "82844a01-8fce-48d2-9466-363a3326e469"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.875171 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82844a01-8fce-48d2-9466-363a3326e469-kube-api-access-4vtbk" (OuterVolumeSpecName: "kube-api-access-4vtbk") pod "82844a01-8fce-48d2-9466-363a3326e469" (UID: "82844a01-8fce-48d2-9466-363a3326e469"). InnerVolumeSpecName "kube-api-access-4vtbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.894428 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82844a01-8fce-48d2-9466-363a3326e469" (UID: "82844a01-8fce-48d2-9466-363a3326e469"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.971833 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.971882 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82844a01-8fce-48d2-9466-363a3326e469-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:38 crc kubenswrapper[4922]: I0126 16:01:38.971897 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vtbk\" (UniqueName: \"kubernetes.io/projected/82844a01-8fce-48d2-9466-363a3326e469-kube-api-access-4vtbk\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:39 crc kubenswrapper[4922]: I0126 16:01:39.625059 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdfp9" event={"ID":"82844a01-8fce-48d2-9466-363a3326e469","Type":"ContainerDied","Data":"e9e55fc4813a3bbc5ef5cdcfaaca234d267c37fb9155dab85fbca5066604ca35"} Jan 26 16:01:39 crc kubenswrapper[4922]: I0126 16:01:39.625141 4922 scope.go:117] "RemoveContainer" containerID="bd2470e1e51174d8e851b2272c5d8ac09374f7d571c48cf5ef2a347886ea3282" Jan 26 16:01:39 crc kubenswrapper[4922]: I0126 16:01:39.625180 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdfp9" Jan 26 16:01:39 crc kubenswrapper[4922]: I0126 16:01:39.651434 4922 scope.go:117] "RemoveContainer" containerID="1b3ac043cf4d98023010be358bd81e45eda281aacd3b609db2278568462d9f24" Jan 26 16:01:39 crc kubenswrapper[4922]: I0126 16:01:39.656987 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdfp9"] Jan 26 16:01:39 crc kubenswrapper[4922]: I0126 16:01:39.666388 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdfp9"] Jan 26 16:01:39 crc kubenswrapper[4922]: I0126 16:01:39.672277 4922 scope.go:117] "RemoveContainer" containerID="a6f199547219e754f120aa3a85a5a28eb3d9dd2523adda8e0f205066958a485d" Jan 26 16:01:41 crc kubenswrapper[4922]: I0126 16:01:41.114615 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82844a01-8fce-48d2-9466-363a3326e469" path="/var/lib/kubelet/pods/82844a01-8fce-48d2-9466-363a3326e469/volumes" Jan 26 16:01:49 crc kubenswrapper[4922]: I0126 16:01:49.092976 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:01:49 crc kubenswrapper[4922]: E0126 16:01:49.093774 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:02:04 crc kubenswrapper[4922]: I0126 16:02:04.093473 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:02:04 crc kubenswrapper[4922]: E0126 16:02:04.094375 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g5x8j_openshift-machine-config-operator(d729a48f-6c8a-41a2-82f0-336269ebbfc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" Jan 26 16:02:15 crc kubenswrapper[4922]: I0126 16:02:15.099740 4922 scope.go:117] "RemoveContainer" containerID="17c5b41d444c97ee0e4659edb7505c3550ed37823461883cfd2f73d4f6ca693f" Jan 26 16:02:16 crc kubenswrapper[4922]: I0126 16:02:16.005924 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" event={"ID":"d729a48f-6c8a-41a2-82f0-336269ebbfc7","Type":"ContainerStarted","Data":"4e641837671b425a043f0fc8c935d6f104f6374c12ab77968fe47216ca67889d"} Jan 26 16:02:26 crc kubenswrapper[4922]: I0126 16:02:26.100281 4922 generic.go:334] "Generic (PLEG): container finished" podID="3c8ab648-37d5-445d-89e1-f52381d284e7" containerID="d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193" exitCode=0 Jan 26 16:02:26 crc kubenswrapper[4922]: I0126 16:02:26.100335 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ls27x/must-gather-xctz5" event={"ID":"3c8ab648-37d5-445d-89e1-f52381d284e7","Type":"ContainerDied","Data":"d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193"} Jan 26 16:02:26 crc kubenswrapper[4922]: I0126 16:02:26.101532 4922 scope.go:117] "RemoveContainer" containerID="d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193" Jan 26 16:02:26 crc kubenswrapper[4922]: I0126 16:02:26.978400 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ls27x_must-gather-xctz5_3c8ab648-37d5-445d-89e1-f52381d284e7/gather/0.log" Jan 26 16:02:39 crc kubenswrapper[4922]: I0126 16:02:39.406387 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ls27x/must-gather-xctz5"] Jan 26 16:02:39 crc kubenswrapper[4922]: I0126 16:02:39.407179 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ls27x/must-gather-xctz5" podUID="3c8ab648-37d5-445d-89e1-f52381d284e7" containerName="copy" containerID="cri-o://93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c" gracePeriod=2 Jan 26 16:02:39 crc kubenswrapper[4922]: I0126 16:02:39.416786 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ls27x/must-gather-xctz5"] Jan 26 16:02:39 crc kubenswrapper[4922]: I0126 16:02:39.851408 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ls27x_must-gather-xctz5_3c8ab648-37d5-445d-89e1-f52381d284e7/copy/0.log" Jan 26 16:02:39 crc kubenswrapper[4922]: I0126 16:02:39.851826 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ls27x/must-gather-xctz5" Jan 26 16:02:39 crc kubenswrapper[4922]: I0126 16:02:39.913713 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c8ab648-37d5-445d-89e1-f52381d284e7-must-gather-output\") pod \"3c8ab648-37d5-445d-89e1-f52381d284e7\" (UID: \"3c8ab648-37d5-445d-89e1-f52381d284e7\") " Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.016052 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54kx5\" (UniqueName: \"kubernetes.io/projected/3c8ab648-37d5-445d-89e1-f52381d284e7-kube-api-access-54kx5\") pod \"3c8ab648-37d5-445d-89e1-f52381d284e7\" (UID: \"3c8ab648-37d5-445d-89e1-f52381d284e7\") " Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.023304 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8ab648-37d5-445d-89e1-f52381d284e7-kube-api-access-54kx5" (OuterVolumeSpecName: "kube-api-access-54kx5") pod "3c8ab648-37d5-445d-89e1-f52381d284e7" (UID: "3c8ab648-37d5-445d-89e1-f52381d284e7"). InnerVolumeSpecName "kube-api-access-54kx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.102509 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8ab648-37d5-445d-89e1-f52381d284e7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3c8ab648-37d5-445d-89e1-f52381d284e7" (UID: "3c8ab648-37d5-445d-89e1-f52381d284e7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.119437 4922 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3c8ab648-37d5-445d-89e1-f52381d284e7-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.119481 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54kx5\" (UniqueName: \"kubernetes.io/projected/3c8ab648-37d5-445d-89e1-f52381d284e7-kube-api-access-54kx5\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.222493 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ls27x_must-gather-xctz5_3c8ab648-37d5-445d-89e1-f52381d284e7/copy/0.log" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.223279 4922 generic.go:334] "Generic (PLEG): container finished" podID="3c8ab648-37d5-445d-89e1-f52381d284e7" containerID="93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c" exitCode=143 Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.223351 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ls27x/must-gather-xctz5" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.223371 4922 scope.go:117] "RemoveContainer" containerID="93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.250409 4922 scope.go:117] "RemoveContainer" containerID="d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.322349 4922 scope.go:117] "RemoveContainer" containerID="93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c" Jan 26 16:02:40 crc kubenswrapper[4922]: E0126 16:02:40.322904 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c\": container with ID starting with 93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c not found: ID does not exist" containerID="93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.322968 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c"} err="failed to get container status \"93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c\": rpc error: code = NotFound desc = could not find container \"93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c\": container with ID starting with 93cc4bc8d15930ef80d47ddadb2ed44d2695b3928291e3d2f04cdf7f4a4da07c not found: ID does not exist" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.323006 4922 scope.go:117] "RemoveContainer" containerID="d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193" Jan 26 16:02:40 crc kubenswrapper[4922]: E0126 16:02:40.323617 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193\": container with ID starting with d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193 not found: ID does not exist" containerID="d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193" Jan 26 16:02:40 crc kubenswrapper[4922]: I0126 16:02:40.323665 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193"} err="failed to get container status \"d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193\": rpc error: code = NotFound desc = could not find container \"d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193\": container with ID starting with d6207eaad5b49cb62bfdb28b2396dc94633d72fa9590c626a195940c557da193 not found: ID does not exist" Jan 26 16:02:41 crc kubenswrapper[4922]: I0126 16:02:41.104235 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8ab648-37d5-445d-89e1-f52381d284e7" path="/var/lib/kubelet/pods/3c8ab648-37d5-445d-89e1-f52381d284e7/volumes" Jan 26 16:03:22 crc kubenswrapper[4922]: I0126 16:03:22.234362 4922 scope.go:117] "RemoveContainer" containerID="9dfb2d86e634ea0cff70d96d2345a1939c147b51e42a5df6cb4a66eb467c1efa" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.223446 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5fsc6"] Jan 26 16:03:41 crc kubenswrapper[4922]: E0126 
Jan 26 16:03:41 crc kubenswrapper[4922]: E0126 16:03:41.225722 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8ab648-37d5-445d-89e1-f52381d284e7" containerName="gather" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.225742 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8ab648-37d5-445d-89e1-f52381d284e7" containerName="gather" Jan 26 16:03:41 crc kubenswrapper[4922]: E0126 16:03:41.225762 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82844a01-8fce-48d2-9466-363a3326e469" containerName="registry-server" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.225768 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="82844a01-8fce-48d2-9466-363a3326e469" containerName="registry-server" Jan 26 16:03:41 crc kubenswrapper[4922]: E0126 16:03:41.225784 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8ab648-37d5-445d-89e1-f52381d284e7" containerName="copy" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.225790 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8ab648-37d5-445d-89e1-f52381d284e7" containerName="copy" Jan 26 16:03:41 crc kubenswrapper[4922]: E0126 16:03:41.225806 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82844a01-8fce-48d2-9466-363a3326e469" containerName="extract-utilities" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.225815 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="82844a01-8fce-48d2-9466-363a3326e469" containerName="extract-utilities" Jan 26 16:03:41 crc kubenswrapper[4922]: E0126 16:03:41.225841 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82844a01-8fce-48d2-9466-363a3326e469" containerName="extract-content" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.225849 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="82844a01-8fce-48d2-9466-363a3326e469" containerName="extract-content" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.226102 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8ab648-37d5-445d-89e1-f52381d284e7" containerName="gather" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.226126 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8ab648-37d5-445d-89e1-f52381d284e7" containerName="copy" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.226163 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="82844a01-8fce-48d2-9466-363a3326e469" containerName="registry-server"
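The burst of cpu_manager/state_mem/memory_manager records above is housekeeping performed as the new pod is admitted: the resource managers drop per-container assignments belonging to pods that no longer exist (here the deleted must-gather and redhat-marketplace pods). A compact sketch of that pruning step; the key type and map layout are illustrative, not the managers' real state files, and the pod UIDs are shortened.

```go
package main

import "fmt"

// key identifies a container's pinned-resource state, mirroring the
// podUID/containerName pairs in the RemoveStaleState records above.
type key struct{ podUID, container string }

// removeStaleState drops cpu/memory-manager style assignments for any
// container whose pod the kubelet no longer tracks.
func removeStaleState(assignments map[key]string, livePods map[string]bool) {
	for k := range assignments {
		if !livePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]string{
		{podUID: "82844a01", container: "registry-server"}: "cpus 0-1",
		{podUID: "live-pod", container: "app"}:             "cpus 2-3",
	}
	removeStaleState(assignments, map[string]bool{"live-pod": true})
	fmt.Println("remaining assignments:", len(assignments)) // 1
}
```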
Need to start a new one" pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.240815 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fsc6"] Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.330234 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-catalog-content\") pod \"community-operators-5fsc6\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.330680 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhx6r\" (UniqueName: \"kubernetes.io/projected/295b7a87-1e22-46af-ad03-3b508594a025-kube-api-access-fhx6r\") pod \"community-operators-5fsc6\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.330968 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-utilities\") pod \"community-operators-5fsc6\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.433041 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-catalog-content\") pod \"community-operators-5fsc6\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.433685 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhx6r\" (UniqueName: \"kubernetes.io/projected/295b7a87-1e22-46af-ad03-3b508594a025-kube-api-access-fhx6r\") pod \"community-operators-5fsc6\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.433734 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-utilities\") pod \"community-operators-5fsc6\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.433987 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-catalog-content\") pod \"community-operators-5fsc6\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.434174 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-utilities\") pod \"community-operators-5fsc6\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.454503 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fhx6r\" (UniqueName: \"kubernetes.io/projected/295b7a87-1e22-46af-ad03-3b508594a025-kube-api-access-fhx6r\") pod \"community-operators-5fsc6\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:41 crc kubenswrapper[4922]: I0126 16:03:41.582471 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:42 crc kubenswrapper[4922]: I0126 16:03:42.143233 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fsc6"] Jan 26 16:03:42 crc kubenswrapper[4922]: I0126 16:03:42.877402 4922 generic.go:334] "Generic (PLEG): container finished" podID="295b7a87-1e22-46af-ad03-3b508594a025" containerID="f96a2e1839621beb6f4a34547f8a2fdfb6f14724b33562da9c0dec71185b9eae" exitCode=0 Jan 26 16:03:42 crc kubenswrapper[4922]: I0126 16:03:42.877490 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fsc6" event={"ID":"295b7a87-1e22-46af-ad03-3b508594a025","Type":"ContainerDied","Data":"f96a2e1839621beb6f4a34547f8a2fdfb6f14724b33562da9c0dec71185b9eae"} Jan 26 16:03:42 crc kubenswrapper[4922]: I0126 16:03:42.877690 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fsc6" event={"ID":"295b7a87-1e22-46af-ad03-3b508594a025","Type":"ContainerStarted","Data":"23ba191b4cc072445578ef502d9e979a34e9c3eb94030626a8036040a9b016f6"} Jan 26 16:03:42 crc kubenswrapper[4922]: I0126 16:03:42.880834 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:03:43 crc kubenswrapper[4922]: I0126 16:03:43.889833 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fsc6" event={"ID":"295b7a87-1e22-46af-ad03-3b508594a025","Type":"ContainerStarted","Data":"83c246bc0b7db4b6adfb89009c0948b45dca389c75d98d326f7bd8ae15a9d175"} Jan 26 16:03:44 crc kubenswrapper[4922]: I0126 16:03:44.901552 4922 generic.go:334] "Generic (PLEG): container finished" podID="295b7a87-1e22-46af-ad03-3b508594a025" containerID="83c246bc0b7db4b6adfb89009c0948b45dca389c75d98d326f7bd8ae15a9d175" exitCode=0 Jan 26 16:03:44 crc kubenswrapper[4922]: I0126 16:03:44.901591 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fsc6" event={"ID":"295b7a87-1e22-46af-ad03-3b508594a025","Type":"ContainerDied","Data":"83c246bc0b7db4b6adfb89009c0948b45dca389c75d98d326f7bd8ae15a9d175"} Jan 26 16:03:45 crc kubenswrapper[4922]: I0126 16:03:45.913118 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fsc6" event={"ID":"295b7a87-1e22-46af-ad03-3b508594a025","Type":"ContainerStarted","Data":"acc99179ac967c14ee6e965060981885952f24ab9902c8f51699b1cc6413ba60"} Jan 26 16:03:45 crc kubenswrapper[4922]: I0126 16:03:45.935021 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5fsc6" podStartSLOduration=2.294724638 podStartE2EDuration="4.934999207s" podCreationTimestamp="2026-01-26 16:03:41 +0000 UTC" firstStartedPulling="2026-01-26 16:03:42.880114993 +0000 UTC m=+6840.082377775" lastFinishedPulling="2026-01-26 16:03:45.520389572 +0000 UTC m=+6842.722652344" observedRunningTime="2026-01-26 16:03:45.93143274 +0000 UTC m=+6843.133695532" watchObservedRunningTime="2026-01-26 
Jan 26 16:03:51 crc kubenswrapper[4922]: I0126 16:03:51.583312 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:51 crc kubenswrapper[4922]: I0126 16:03:51.583867 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:51 crc kubenswrapper[4922]: I0126 16:03:51.638903 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:52 crc kubenswrapper[4922]: I0126 16:03:52.022214 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:52 crc kubenswrapper[4922]: I0126 16:03:52.068925 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fsc6"] Jan 26 16:03:53 crc kubenswrapper[4922]: I0126 16:03:53.990972 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5fsc6" podUID="295b7a87-1e22-46af-ad03-3b508594a025" containerName="registry-server" containerID="cri-o://acc99179ac967c14ee6e965060981885952f24ab9902c8f51699b1cc6413ba60" gracePeriod=2 Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.004237 4922 generic.go:334] "Generic (PLEG): container finished" podID="295b7a87-1e22-46af-ad03-3b508594a025" containerID="acc99179ac967c14ee6e965060981885952f24ab9902c8f51699b1cc6413ba60" exitCode=0 Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.004311 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fsc6" event={"ID":"295b7a87-1e22-46af-ad03-3b508594a025","Type":"ContainerDied","Data":"acc99179ac967c14ee6e965060981885952f24ab9902c8f51699b1cc6413ba60"} Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.004810 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fsc6" event={"ID":"295b7a87-1e22-46af-ad03-3b508594a025","Type":"ContainerDied","Data":"23ba191b4cc072445578ef502d9e979a34e9c3eb94030626a8036040a9b016f6"} Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.004822 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23ba191b4cc072445578ef502d9e979a34e9c3eb94030626a8036040a9b016f6" Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.008989 4922 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.049320 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-catalog-content\") pod \"295b7a87-1e22-46af-ad03-3b508594a025\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.049700 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-utilities\") pod \"295b7a87-1e22-46af-ad03-3b508594a025\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.049802 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhx6r\" (UniqueName: \"kubernetes.io/projected/295b7a87-1e22-46af-ad03-3b508594a025-kube-api-access-fhx6r\") pod \"295b7a87-1e22-46af-ad03-3b508594a025\" (UID: \"295b7a87-1e22-46af-ad03-3b508594a025\") " Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.052047 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-utilities" (OuterVolumeSpecName: "utilities") pod "295b7a87-1e22-46af-ad03-3b508594a025" (UID: "295b7a87-1e22-46af-ad03-3b508594a025"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.060210 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/295b7a87-1e22-46af-ad03-3b508594a025-kube-api-access-fhx6r" (OuterVolumeSpecName: "kube-api-access-fhx6r") pod "295b7a87-1e22-46af-ad03-3b508594a025" (UID: "295b7a87-1e22-46af-ad03-3b508594a025"). InnerVolumeSpecName "kube-api-access-fhx6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.114137 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "295b7a87-1e22-46af-ad03-3b508594a025" (UID: "295b7a87-1e22-46af-ad03-3b508594a025"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.152820 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.152907 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/295b7a87-1e22-46af-ad03-3b508594a025-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:55 crc kubenswrapper[4922]: I0126 16:03:55.152921 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhx6r\" (UniqueName: \"kubernetes.io/projected/295b7a87-1e22-46af-ad03-3b508594a025-kube-api-access-fhx6r\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:56 crc kubenswrapper[4922]: I0126 16:03:56.013170 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5fsc6" Jan 26 16:03:56 crc kubenswrapper[4922]: I0126 16:03:56.050030 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fsc6"] Jan 26 16:03:56 crc kubenswrapper[4922]: I0126 16:03:56.059979 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5fsc6"] Jan 26 16:03:57 crc kubenswrapper[4922]: I0126 16:03:57.104019 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="295b7a87-1e22-46af-ad03-3b508594a025" path="/var/lib/kubelet/pods/295b7a87-1e22-46af-ad03-3b508594a025/volumes" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.123724 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m42hj"] Jan 26 16:04:29 crc kubenswrapper[4922]: E0126 16:04:29.124961 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295b7a87-1e22-46af-ad03-3b508594a025" containerName="extract-content" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.124976 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="295b7a87-1e22-46af-ad03-3b508594a025" containerName="extract-content" Jan 26 16:04:29 crc kubenswrapper[4922]: E0126 16:04:29.124991 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295b7a87-1e22-46af-ad03-3b508594a025" containerName="registry-server" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.124997 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="295b7a87-1e22-46af-ad03-3b508594a025" containerName="registry-server" Jan 26 16:04:29 crc kubenswrapper[4922]: E0126 16:04:29.125014 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295b7a87-1e22-46af-ad03-3b508594a025" containerName="extract-utilities" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.125023 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="295b7a87-1e22-46af-ad03-3b508594a025" containerName="extract-utilities" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.125423 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="295b7a87-1e22-46af-ad03-3b508594a025" containerName="registry-server" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.127188 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.139336 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m42hj"] Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.220243 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-utilities\") pod \"certified-operators-m42hj\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.220644 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-catalog-content\") pod \"certified-operators-m42hj\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.220728 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmg7\" (UniqueName: \"kubernetes.io/projected/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-kube-api-access-vsmg7\") pod \"certified-operators-m42hj\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.322921 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-catalog-content\") pod \"certified-operators-m42hj\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.322989 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsmg7\" (UniqueName: \"kubernetes.io/projected/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-kube-api-access-vsmg7\") pod \"certified-operators-m42hj\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.323041 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-utilities\") pod \"certified-operators-m42hj\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.323546 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-catalog-content\") pod \"certified-operators-m42hj\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.323580 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-utilities\") pod \"certified-operators-m42hj\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.349868 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vsmg7\" (UniqueName: \"kubernetes.io/projected/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-kube-api-access-vsmg7\") pod \"certified-operators-m42hj\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:29 crc kubenswrapper[4922]: I0126 16:04:29.456621 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:30 crc kubenswrapper[4922]: I0126 16:04:30.015258 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m42hj"] Jan 26 16:04:30 crc kubenswrapper[4922]: I0126 16:04:30.358583 4922 generic.go:334] "Generic (PLEG): container finished" podID="c8b85fbc-9b2e-4d6c-bf13-69d7e685c333" containerID="25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114" exitCode=0 Jan 26 16:04:30 crc kubenswrapper[4922]: I0126 16:04:30.358632 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42hj" event={"ID":"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333","Type":"ContainerDied","Data":"25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114"} Jan 26 16:04:30 crc kubenswrapper[4922]: I0126 16:04:30.358661 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42hj" event={"ID":"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333","Type":"ContainerStarted","Data":"ef0c2be724ff2d0243acbefe8db42e92e9aebf3e72d1d97ce789284ad9c0bdda"} Jan 26 16:04:32 crc kubenswrapper[4922]: I0126 16:04:32.404756 4922 generic.go:334] "Generic (PLEG): container finished" podID="c8b85fbc-9b2e-4d6c-bf13-69d7e685c333" containerID="cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101" exitCode=0 Jan 26 16:04:32 crc kubenswrapper[4922]: I0126 16:04:32.404807 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42hj" event={"ID":"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333","Type":"ContainerDied","Data":"cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101"} Jan 26 16:04:33 crc kubenswrapper[4922]: I0126 16:04:33.417403 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42hj" event={"ID":"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333","Type":"ContainerStarted","Data":"4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3"} Jan 26 16:04:33 crc kubenswrapper[4922]: I0126 16:04:33.447465 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m42hj" podStartSLOduration=1.9295613280000001 podStartE2EDuration="4.44744661s" podCreationTimestamp="2026-01-26 16:04:29 +0000 UTC" firstStartedPulling="2026-01-26 16:04:30.361308586 +0000 UTC m=+6887.563571358" lastFinishedPulling="2026-01-26 16:04:32.879193868 +0000 UTC m=+6890.081456640" observedRunningTime="2026-01-26 16:04:33.43931453 +0000 UTC m=+6890.641577312" watchObservedRunningTime="2026-01-26 16:04:33.44744661 +0000 UTC m=+6890.649709382" Jan 26 16:04:39 crc kubenswrapper[4922]: I0126 16:04:39.456934 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:39 crc kubenswrapper[4922]: I0126 16:04:39.457603 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:39 crc kubenswrapper[4922]: I0126 16:04:39.515214 
4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:39 crc kubenswrapper[4922]: I0126 16:04:39.565395 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:39 crc kubenswrapper[4922]: I0126 16:04:39.750577 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m42hj"] Jan 26 16:04:41 crc kubenswrapper[4922]: I0126 16:04:41.307297 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:04:41 crc kubenswrapper[4922]: I0126 16:04:41.307617 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:04:41 crc kubenswrapper[4922]: I0126 16:04:41.500526 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m42hj" podUID="c8b85fbc-9b2e-4d6c-bf13-69d7e685c333" containerName="registry-server" containerID="cri-o://4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3" gracePeriod=2 Jan 26 16:04:41 crc kubenswrapper[4922]: I0126 16:04:41.978946 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.123260 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-catalog-content\") pod \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.123411 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-utilities\") pod \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.123447 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsmg7\" (UniqueName: \"kubernetes.io/projected/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-kube-api-access-vsmg7\") pod \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\" (UID: \"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333\") " Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.125051 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-utilities" (OuterVolumeSpecName: "utilities") pod "c8b85fbc-9b2e-4d6c-bf13-69d7e685c333" (UID: "c8b85fbc-9b2e-4d6c-bf13-69d7e685c333"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.133102 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-kube-api-access-vsmg7" (OuterVolumeSpecName: "kube-api-access-vsmg7") pod "c8b85fbc-9b2e-4d6c-bf13-69d7e685c333" (UID: "c8b85fbc-9b2e-4d6c-bf13-69d7e685c333"). InnerVolumeSpecName "kube-api-access-vsmg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.228325 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.228590 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsmg7\" (UniqueName: \"kubernetes.io/projected/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-kube-api-access-vsmg7\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.329480 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8b85fbc-9b2e-4d6c-bf13-69d7e685c333" (UID: "c8b85fbc-9b2e-4d6c-bf13-69d7e685c333"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.330904 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.515087 4922 generic.go:334] "Generic (PLEG): container finished" podID="c8b85fbc-9b2e-4d6c-bf13-69d7e685c333" containerID="4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3" exitCode=0 Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.515116 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42hj" event={"ID":"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333","Type":"ContainerDied","Data":"4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3"} Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.515774 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42hj" event={"ID":"c8b85fbc-9b2e-4d6c-bf13-69d7e685c333","Type":"ContainerDied","Data":"ef0c2be724ff2d0243acbefe8db42e92e9aebf3e72d1d97ce789284ad9c0bdda"} Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.515150 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m42hj" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.515833 4922 scope.go:117] "RemoveContainer" containerID="4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.538566 4922 scope.go:117] "RemoveContainer" containerID="cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.556689 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m42hj"] Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.565691 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m42hj"] Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.588412 4922 scope.go:117] "RemoveContainer" containerID="25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.626664 4922 scope.go:117] "RemoveContainer" containerID="4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3" Jan 26 16:04:42 crc kubenswrapper[4922]: E0126 16:04:42.627261 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3\": container with ID starting with 4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3 not found: ID does not exist" containerID="4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.627399 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3"} err="failed to get container status \"4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3\": rpc error: code = NotFound desc = could not find container \"4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3\": container with ID starting with 4d10dd04aa8d9bef09981ca89e59675902a5e0c6f3fc6619e21f0ec04f77eef3 not found: ID does not exist" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.627515 4922 scope.go:117] "RemoveContainer" containerID="cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101" Jan 26 16:04:42 crc kubenswrapper[4922]: E0126 16:04:42.628175 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101\": container with ID starting with cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101 not found: ID does not exist" containerID="cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.628232 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101"} err="failed to get container status \"cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101\": rpc error: code = NotFound desc = could not find container \"cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101\": container with ID starting with cacdb1a275fdd367d2856537fb303269b2702a2327a4b659f69ae920e569c101 not found: ID does not exist" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.628273 4922 scope.go:117] "RemoveContainer" 
containerID="25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114" Jan 26 16:04:42 crc kubenswrapper[4922]: E0126 16:04:42.628610 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114\": container with ID starting with 25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114 not found: ID does not exist" containerID="25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114" Jan 26 16:04:42 crc kubenswrapper[4922]: I0126 16:04:42.628643 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114"} err="failed to get container status \"25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114\": rpc error: code = NotFound desc = could not find container \"25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114\": container with ID starting with 25b134be69e7dd9394b8bbd3a0052caf2ea678dba98c03f827785404e77f6114 not found: ID does not exist" Jan 26 16:04:43 crc kubenswrapper[4922]: I0126 16:04:43.108414 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b85fbc-9b2e-4d6c-bf13-69d7e685c333" path="/var/lib/kubelet/pods/c8b85fbc-9b2e-4d6c-bf13-69d7e685c333/volumes" Jan 26 16:05:11 crc kubenswrapper[4922]: I0126 16:05:11.307025 4922 patch_prober.go:28] interesting pod/machine-config-daemon-g5x8j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:05:11 crc kubenswrapper[4922]: I0126 16:05:11.307661 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g5x8j" podUID="d729a48f-6c8a-41a2-82f0-336269ebbfc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"